cc_hash_table<int, char>

is a "map" mapping each int value to a char, but

cc_hash_table<int, null_type>

is a type that uniquely stores int values.

tree<int, size_t>

Stepping back a bit, and starting from the beginning.

std::tr1::unordered_map<std::pair<std::string, unsigned long>, float, ...>

std::tr1::unordered_multimap<std::pair<std::string, unsigned long>, float, ...>

Put differently, the standards' non-unique mapping associative-containers are associative containers that map primary keys to linked lists that are embedded into the container. The graphic below shows again the two containers from the first graphic above, this time with the embedded linked lists of the grayed nodes marked explicitly.

These embedded linked lists have several disadvantages.

The underlying data structure embeds the linked lists according to its own considerations, which means that the search path for a value might include several different equivalent-key values. For example, the search path for the black node in the first graphic, labels A or B, includes more than a single gray node.

The links of the linked lists are the underlying data structure's nodes, which typically are quite structured. In the case of tree-based containers (the graphic above, label B), each "link" is actually a node with three pointers (one to a parent and two to children), and a relatively complicated iteration algorithm. The linked lists, therefore, can take up quite a lot of memory, and iterating over all values equal to a given key (through the return value of the standard library's `equal_range`) can be expensive.

The primary key is stored multiple times; this uses more memory.

Finally, the interface of this design excludes several useful underlying data structures. Of all the unordered self-organizing data structures, practically only collision-chaining hash tables can (efficiently) guarantee that equivalent-key values are stored consecutively.

The above reasons hold even when the ratio of secondary keys to primary keys (or average number of identical keys) is small, but when it is large, there are more severe problems:

The underlying data structures order the links inside each embedded linked list according to their internal considerations, which effectively means that each of the lists is unordered. Irrespective of the underlying data structure, searching for a specific value can degrade to linear complexity.

Similarly to the above point, it is impossible to apply to the secondary keys considerations that apply to primary keys. For example, it is not possible to maintain secondary keys by sorted order.

While the interface "understands" that all equivalent-key values constitute a distinct list (through `equal_range`), the underlying data structure typically does not. This means that operations such as erasing from a tree-based container all values whose keys are equivalent to a given key can be super-linear in the size of the tree; the same is true for several other operations that target a specific list.

In this library, all associative containers map (or store) unique-key values. One can (1) map primary keys to secondary associative containers (containers of secondary keys) or to non-associative containers, (2) map identical keys to a size type representing the number of times they occur, or (3) use any combination of (1) and (2). Instead of allowing multiple equivalent-key values, this library supplies associative containers based on underlying data structures that are suitable as secondary associative containers.

In the figure below, labels A and B show the equivalent underlying data structures in this library, corresponding, respectively, to labels A and B of the first graphic above. Each shaded box represents some size type or secondary associative container.

In the first example above, then, one would use an associative container mapping each user to an associative container which maps each application id to a start time (see `example/basic_multimap.cc`); in the second example, one would use an associative container mapping each `int` to some size type indicating the number of times it logically occurs (see `example/basic_multiset.cc`).

See the discussion in list-based container types for containers especially suited as secondary associative-containers.

```
point_const_iterator find(const_key_reference r_key) const;
point_iterator find(const_key_reference r_key);
std::pair<point_iterator, bool> insert(const_reference r_val);
```

Note that point-type iterators in self-organizing containers (hash-based associative containers) lack movement operators, such as `operator++` - in fact, this is the reason why this library differentiates from the standard C++ library's design on this point.

Typically, one can determine an iterator's movement capabilities using `std::iterator_traits<It>::iterator_category`, which is a `struct` indicating the iterator's movement capabilities. Unfortunately, none of the standard predefined categories reflect a pointer's *not* having any movement capabilities whatsoever. Consequently, `pb_ds` adds a type `trivial_iterator_tag` (whose name is taken from a concept in C++ standardese), which is the category of iterators with no movement capabilities. All other standard C++ library tags, such as `forward_iterator_tag`, retain their common use.

`basic_invalidation_guarantee` corresponds to a basic guarantee that a point-type iterator, a found pointer, or a found reference, remains valid as long as the container object is not modified.

`point_invalidation_guarantee` corresponds to a guarantee that a point-type iterator, a found pointer, or a found reference, remains valid even if the container object is modified.

`range_invalidation_guarantee` corresponds to a guarantee that a range-type iterator remains valid even if the container object is modified.

To find the invalidation guarantee of a container, one can use

```
typename container_traits<Cntnr>::invalidation_guarantee
```

Note that this hierarchy corresponds to the logic it represents: if a container has range-invalidation guarantees, then it must also have find-invalidation guarantees; correspondingly, its invalidation guarantee (in this case `range_invalidation_guarantee`) can be cast to its base class (in this case `point_invalidation_guarantee`). This means that this hierarchy can be used easily using standard metaprogramming techniques, by specializing on the type of `invalidation_guarantee`.

These types of problems were addressed, in a more general setting, in [biblio.meyers96more] - Item 2. In our opinion, an invalidation-guarantee hierarchy would solve these problems in all container types - not just associative containers.

```
template<typename Cntnr>
void some_op_sequence(Cntnr& r_container)
{
    ...
}
```

then one needs to address the following questions in the body of `some_op_sequence`:

The remainder of this section explains these issues in detail.

This library contains a container tag hierarchy corresponding to the diagram below.

Given any container `Cntnr`, the tag of the underlying data structure can be found via `typename Cntnr::container_category`.

The collision-chaining hash-based container has the following declaration.

```
template<
  typename Key,
  typename Mapped,
  typename Hash_Fn = std::hash<Key>,
  typename Eq_Fn = std::equal_to<Key>,
  typename Comb_Hash_Fn = direct_mask_range_hashing<>,
  typename Resize_Policy = /* default, explained below */,
  bool Store_Hash = false,
  typename Allocator = std::allocator<char> >
class cc_hash_table;
```

The parameters have the following meaning:

The probing hash-based container has the following declaration.

```
template<
  typename Key,
  typename Mapped,
  typename Hash_Fn = std::hash<Key>,
  typename Eq_Fn = std::equal_to<Key>,
  typename Comb_Probe_Fn = direct_mask_range_hashing<>,
  typename Probe_Fn = /* default, explained below */,
  typename Resize_Policy = /* default, explained below */,
  bool Store_Hash = false,
  typename Allocator = std::allocator<char> >
class gp_hash_table;
```

The parameters are identical to those of the collision-chaining container, except for the following.

Let U be a domain (e.g., the integers, or the strings of 3 characters). A hash-table algorithm needs to map elements of U "uniformly" into the range [0,..., m - 1] (where m is a non-negative integral value, and is, in general, time varying). I.e., the algorithm needs a ranged-hash function

f : U × Z_{+} → Z_{+}

such that for any u in U ,

0 ≤ f(u, m) ≤ m - 1

and which has "good uniformity" properties (say [biblio.knuth98sorting].) One common solution is to use the composition of the hash function

h : U → Z_{+} ,

which maps elements of U into the non-negative integrals, and

g : Z_{+} × Z_{+} →
Z_{+},

which maps a non-negative hash value, and a non-negative
range upper-bound into a non-negative integral in the range
between 0 (inclusive) and the range upper bound (exclusive),
i.e., for any r in Z_{+},

0 ≤ g(r, m) ≤ m - 1

The resulting ranged-hash function is

f(u, m) = g(h(u), m)

From the above, it is obvious that given g and h, f can always be composed (however the converse is not true). The standard's hash-based containers allow specifying a hash function, and use a hard-wired range-hashing function; the ranged-hash function is implicitly composed.

The above describes the case where a key is to be mapped into a single position within a hash table, e.g., in a collision-chaining table. In other cases, a key is to be mapped into a sequence of positions within a table, e.g., in a probing table. Similar terms apply in this case: the table requires a ranged probe function, mapping a key into a sequence of positions within the table. This is typically achieved by composing a hash function mapping the key into a non-negative integral type, a probe function transforming the hash value into a sequence of hash values, and a range-hashing function transforming the sequence of hash values into a sequence of positions.

Some common choices for range-hashing functions are the division, multiplication, and middle-square methods ([biblio.knuth98sorting]), defined as

g(r, m) = r mod m ,

g(r, m) = ⌈ u/v ( a r mod v ) ⌉

and

g(r, m) = ⌈ u/v ( r^{2} mod v ) ⌉

respectively, for some positive integrals u and v (typically powers of 2), and some a. Each of these range-hashing functions works best in a different setting.

The division method (see above) is a very common choice. However, even this single method can be implemented in two very different ways. It is possible to implement it using the low-level % (modulo) operation (for any m), or the low-level & (bit-mask) operation (for the case where m is a power of 2), i.e.,

g(r, m) = r % m

and

g(r, m) = r & (m - 1),

respectively.

The % (modulo) implementation has the advantage that for m a prime far from a power of 2, g(r, m) is affected by all the bits of r (minimizing the chance of collision). It has the disadvantage of using the costly modulo operation. This method is hard-wired into SGI's implementation.

The & (bit-mask) implementation has the advantage of relying on the fast bit-wise and operation. It has the disadvantage that g(r, m) is affected only by the low-order bits of r. This method is hard-wired into Dinkumware's implementation.

In some cases it is beneficial to allow the client to directly specify a ranged-hash function. It is true that the writer of the ranged-hash function cannot rely on the values of m having specific numerical properties suitable for hashing (in the sense used in [biblio.knuth98sorting]), since the values of m are determined by a resize policy with possibly orthogonal considerations.

There are two cases where a ranged-hash function can be superior. The first is when using perfect hashing; the second is when the values of m can be used to estimate the "general" number of distinct values required. This is described in the following.

Let

s = [ s_{0},..., s_{t - 1}]

be a string of t characters, each of which is from domain S. Consider the following ranged-hash function:

f_{1}(s, m) = ∑ _{i = 0}^{t - 1} s_{i} a^{i} mod m ,

where a is some non-negative integral value. This is the standard string-hashing function used in SGI's implementation (with a = 5). Its advantage is that it takes into account all of the characters of the string.

Now assume that s is the string representation of a long DNA sequence (and so S = {'A', 'C', 'G', 'T'}). In this case, scanning the entire string might be prohibitively expensive. A possible alternative might be to use only the first k characters of the string, where

|S|^{k} ≥ m ,

i.e., using the hash function

f_{2}(s, m) = ∑ _{i = 0}^{k - 1} s_{i} a^{i} mod m ,

requiring scanning over only

k = log_{4}( m )

characters.

Other more elaborate hash-functions might scan k characters starting at a random position (determined at each resize), or scanning k random positions (determined at each resize), i.e., using

f_{3}(s, m) = ∑ _{i = r_{0}}^{r_{0} + k - 1} s_{i} a^{i} mod m ,

or

f_{4}(s, m) = ∑ _{i = 0}^{k - 1} s_{r_{i}} a^{r_{i}} mod m ,

respectively, for r_{0},..., r_{k-1} each in the (inclusive) range [0,...,t-1].

It should be noted that the above functions cannot be decomposed as per a ranged hash composed of hash and range hashing.

If `cc_hash_table`'s hash-functor, `Hash_Fn`, is instantiated by `null_type`, then `Comb_Hash_Fn` is taken to be a ranged-hash function. The graphic below shows an `insert` sequence diagram. The user inserts an element (point A), and the container transforms the key into a position using the combining functor (points B and C).

α_{min} ≤ (number of stored elements) / (hash-table size) ≤ α_{max}

Denote the probability that a probe sequence of length
k appears in bin i by p_{i}, the
length of the probe sequence of bin i by
l_{i}, and assume uniform distribution. Then

P(l_{1} ≥ k) = P(l_{1} ≥ α ( 1 + (k / α - 1) )) ≤ (a)

e ^ ( - ( α ( k / α - 1 )^{2} ) / 2) ,
where (a) follows from the Chernoff bound ([biblio.motwani95random]). To
calculate the probability that some bin contains a probe
sequence greater than k, we note that the
l_{i} are negatively-dependent
([biblio.dubhashi98neg])
. Let
I(.) denote the indicator function. Then

P ( ∑ _{i = 1}^{m} I(l_{i} ≥ k) ≥ 1 ) =

P ( ∑ _{i = 1}^{m} I ( l_{i} ≥ k ) ≥ m p_{1} ( 1 + 1 / (m p_{1}) - 1 ) ) ≤ (a)

e ^ ( ( - m p_{1} ( 1 / (m p_{1}) - 1 ) ^{2} ) / 2 ) ,

where (a) follows from the fact that the Chernoff bound can be applied to negatively-dependent variables ([biblio.dubhashi98neg]). Inserting the first probability equation into the second one, and equating with 1/m, we obtain

k ~ √ ( 2 α ln 2 m ln(m) ) .

```
cc_hash_table<typename Key, typename Mapped, ..., typename Resize_Policy, ...>
    : public Resize_Policy
```

In practice, a resize policy can usually be orthogonally decomposed to a size policy and a trigger policy. Consequently, the library contains a single class for instantiating a resize policy: `hash_standard_resize_policy` is parametrized by `Size_Policy` and `Trigger_Policy`, derives `public`ly from both, and acts as a standard delegate ([biblio.gof]) to these policies.

The two graphics immediately below show sequence diagrams illustrating the interaction between the standard resize policy and its trigger and size policies, respectively.

The library includes the following instantiations of size and trigger policies:

Furthermore, the policies themselves are parametrized by template arguments that determine the methods they support ([biblio.alexandrescu01modern] shows techniques for doing so). `hash_standard_resize_policy` is parametrized by `External_Size_Access`, which determines whether it supports methods for querying the actual size of the table or resizing it. `hash_load_check_resize_trigger` is parametrized by `External_Load_Access`, which determines whether it supports methods for querying or modifying the loads. `cc_hash_max_collision_check_resize_trigger` is parametrized by `External_Load_Access`, which determines whether it supports methods for querying the load.

Some operations, for example, resizing a container at run time, or changing the load factors of a load-check trigger policy, require the container itself to resize. As mentioned above, the hash-based containers themselves do not contain these types of methods, only their resize policies do. Consequently, there must be some mechanism for a resize policy to manipulate the hash-based container. As the hash-based container is a subclass of the resize policy, this is done through virtual methods. Each hash-based container has a `private` `virtual` method:

```
virtual void do_resize(size_type new_size);
```

which resizes the container. Implementations of `Resize_Policy` can export public methods for resizing the container externally; these methods internally call `do_resize` to resize the table.

The tree-based container has the following declaration:

```
template<
  typename Key,
  typename Mapped,
  typename Cmp_Fn = std::less<Key>,
  typename Tag = rb_tree_tag,
  template<typename Const_Node_Iterator, typename Node_Iterator,
           typename Cmp_Fn_, typename Allocator_>
    class Node_Update = null_node_update,
  typename Allocator = std::allocator<char> >
class tree;
```

The parameters have the following meaning:

std::make_pair(29,42)

In order to do so, each tree stores some metadata in each node, and maintains node invariants (see [biblio.clrs2001].) The first stores in each node the size of the sub-tree rooted at the node; the second stores at each node the maximal endpoint of the intervals at the sub-tree rooted at the node.

Supporting such trees is difficult for a number of reasons:

There must be a way to specify what a node's metadata should be (if any).

Various operations can invalidate node invariants. The graphic below shows how a right rotation, performed on A, results in B, with nodes x and y having corrupted invariants (the grayed nodes in C). The graphic shows how an insert, performed on D, results in E, with nodes x and y having corrupted invariants (the grayed nodes in F). It is not feasible to know outside the tree the effect of an operation on the nodes of the tree.

The search paths of standard associative containers are defined by comparisons between keys, and not through metadata.

It is not feasible to know in advance which methods trees can support. Besides the usual `find` method, the first tree can support a `find_by_order` method, while the second can support an `overlaps` method.

These problems are solved by a combination of two means: node iterators, and template-template node updater parameters.

```
const_node_iterator node_begin() const;
node_iterator node_begin();
const_node_iterator node_end() const;
node_iterator node_end();
```

`node_update` (an instantiation of `Node_Update`) must define `metadata_type` as the type of metadata it requires. For order statistics, e.g., `metadata_type` might be `size_t`. The tree defines within each node a `metadata_type` object.

`node_update` must also define the following method for restoring node invariants:

```
void operator()(node_iterator nd_it, const_node_iterator end_nd_it);
```

In this method, `nd_it` is a `node_iterator` corresponding to a node such that A) all its descendants have valid invariants, and B) its own invariants might be violated; `end_nd_it` is a `const_node_iterator` corresponding to a just-after-leaf node. This method should correct the node invariants of the node pointed to by `nd_it`. For example, say node x in the graphic below, label A, has an invalid invariant, but its children, y and z, have valid invariants. After the invocation, all three nodes should have valid invariants, as in label B.

When a tree operation might invalidate some node invariant, it invokes this method in its `node_update` base to restore the invariant. For example, the graphic below shows an `insert` operation (point A); the tree performs some operations, and calls the update functor three times (points B, C, and D). (It is well known that any `insert`, `erase`, `split` or `join` can restore all node invariants by a small number of node-invariant updates ([biblio.clrs2001]).)

To complete the description of the scheme, three questions need to be answered:

How can a tree which supports order statistics define a method such as `find_by_order`?

How can the node updater base access methods of the tree?

How can the following cyclic dependency be resolved? `node_update` is a base class of the tree, yet it uses node iterators defined in the tree (its child).

The first two questions are answered by the fact that `node_update` (an instantiation of `Node_Update`) is a *public* base class of the tree. Consequently:

Any public methods of `node_update` are automatically methods of the tree ([biblio.alexandrescu01modern]). Thus an order-statistics node updater, `tree_order_statistics_node_update`, defines the `find_by_order` method; any tree instantiated by this policy consequently supports this method as well.

In C++, if a base class declares a method as `virtual`, it is `virtual` in its subclasses. If `node_update` needs to access one of the tree's methods, say the member function `end`, it simply declares that method as `virtual` abstract.

The cyclic dependency is solved through template-template parameters. `Node_Update` is parametrized by the tree's node iterators, its comparison functor, and its allocator type. Thus, instantiations of `Node_Update` have all the information required.

This library assumes that constructing a metadata object and modifying it are exception free. Suppose that during some method, say `insert`, a metadata-related operation (e.g., changing the value of a metadata object) throws an exception. Rolling back the method would then be unusually complex.

Previously, a distinction was made between redundant policies and null policies. Node invariants show a case where null policies are required.

Assume a regular tree is required, one which need not support order statistics or interval overlap queries. Seemingly, in this case a redundant policy - a policy which doesn't affect nodes' contents - would suffice. This, however, would lead to the following drawbacks:

Each node would carry a useless metadata object, wasting space.

The tree cannot know if its `Node_Update` policy actually modifies a node's metadata (this is halting reducible). In the graphic below, assume the shaded node is inserted. The tree would have to traverse the useless path shown to the root, applying redundant updates all the way.

A null policy class, `null_node_update`, solves both these problems. The tree detects that node invariants are irrelevant, and defines all methods accordingly.

The trie-based container has the following declaration:

```
template<
  typename Key,
  typename Mapped,
  typename Cmp_Fn = std::less<Key>,
  typename Tag = pat_trie_tag,
  template<typename Const_Node_Iterator, typename Node_Iterator,
           typename E_Access_Traits_, typename Allocator_>
    class Node_Update = null_node_update,
  typename Allocator = std::allocator<char> >
class trie;
```

The parameters have the following meaning:

Following is a description of a (PATRICIA) trie (this implementation follows [biblio.okasaki98mereable] and [biblio.filliatre2000ptset]).

A (PATRICIA) trie is similar to a tree, but with the following differences:

It explicitly views keys as a sequence of elements. E.g., a trie can view a string as a sequence of characters; a trie can view a number as a sequence of bits.

It is not (necessarily) binary. Each node has fan-out n + 1, where n is the number of distinct elements.

It stores values only at leaf nodes.

Internal nodes have the properties that A) each has at least two children, and B) each shares the same prefix with any of its descendants.

A (PATRICIA) trie has some useful properties:

It can be configured to use large node fan-out, giving it very efficient find performance (albeit at some cost in insertion complexity and size).

It works well for common-prefix keys.

It can support efficiently queries such as which keys match a certain prefix. This is sometimes useful in file systems and routers, and for "type-ahead" aka predictive text matching on mobile devices.

The graphic below shows the scheme, as well as some predefined policies (which are explained below).

This library offers the following pre-defined trie node updating policies:

`trie_order_statistics_node_update` supports order statistics.

`trie_prefix_search_node_update` supports searching for ranges that match a given prefix.

`null_node_update` is the null node updater.

The list-based container has the following declaration:

```
template<
  typename Key,
  typename Mapped,
  typename Eq_Fn = std::equal_to<Key>,
  typename Update_Policy = move_to_front_lu_policy<>,
  typename Allocator = std::allocator<char> >
class list_update;
```

The parameters have the following meaning:

Since a list-based associative container does not order elements by keys, is it possible to order the list in some useful manner? Remarkably, many on-line competitive algorithms exist for reordering lists to reflect access prediction. (See [biblio.motwani95random] and [biblio.andrew04mtf]).

List-update algorithms reorder lists as elements are accessed. They try to determine, by the access history, which keys to move to the front of the list. Some of these algorithms require adding some metadata alongside each entry.

For example, in the graphic below label A shows the counter algorithm. Each node contains both a key and a count metadata (shown in bold). When an element is accessed (e.g. 6) its count is incremented, as shown in label B. If the count reaches some predetermined value, say 10, as shown in label C, the count is set to 0 and the node is moved to the front of the list, as in label D.

An instantiation of `Update_Policy` must define internally two operators:

```
update_metadata operator()();
bool operator()(update_metadata&);
```

The library contains two predefined implementations of list-update policies. The first is `lu_counter_policy`, which implements the counter algorithm described above. The second is `lu_move_to_front_policy`, which unconditionally moves an accessed element to the front of the list. The latter type is very useful in this library, since there is no need to associate metadata with each element. (See [biblio.andrew04mtf].)

The priority queue container has the following declaration:

```
template<
  typename Value_Type,
  typename Cmp_Fn = std::less<Value_Type>,
  typename Tag = pairing_heap_tag,
  typename Allocator = std::allocator<char> >
class priority_queue;
```

The parameters have the following meaning:

The `Tag` parameter specifies which underlying data structure to use. Instantiating it by `pairing_heap_tag`, `binary_heap_tag`, `binomial_heap_tag`, `rc_binomial_heap_tag`, or `thin_heap_tag` specifies, respectively, an underlying pairing heap ([biblio.fredman86pairing]), binary heap ([biblio.clrs2001]), binomial heap ([biblio.clrs2001]), binomial heap with a redundant binary counter ([biblio.maverik_lowerbounds]), or thin heap ([biblio.kt99fat_heaps]).

As mentioned in the tutorial, `__gnu_pbds::priority_queue` shares most of the same interface with `std::priority_queue`. E.g., if `q` is a priority queue of type `Q`, then `q.top()` will return the "largest" value in the container (according to `typename Q::cmp_fn`). `__gnu_pbds::priority_queue` has a larger (and slightly different) interface than `std::priority_queue`, however, since typically `push` and `pop` are deemed insufficient for manipulating priority queues.

Different settings require different priority-queue implementations, which are described later; the traits discussion covers ways to differentiate between the traits of the different implementations.

The following snippet demonstrates manipulating an arbitrary value:

```
// A priority queue of integers.
priority_queue<int> p;

// Insert some values into the priority queue.
priority_queue<int>::point_iterator it = p.push(0);
p.push(1);
p.push(2);

// Now modify a value.
p.modify(it, 3);

assert(p.top() == 3);
```

Roughly speaking, any value that is both pushed and popped from a priority queue must incur a logarithmic expense (in the amortized sense). Any priority-queue implementation that avoided this would violate known bounds on comparison-based sorting (see [biblio.clrs2001] and [biblio.brodal96priority]).

Most implementations do not differ in the asymptotic amortized complexity of `push` and `pop` operations, but they differ in the constants involved, in the complexity of other operations (e.g., `modify`), and in the worst-case complexity of single operations. In general, the more "structured" an implementation (i.e., the more internal invariants it possesses), the higher its amortized complexity of `push` and `pop` operations.

This library implements the different algorithms using a single class, `priority_queue`. Instantiating the `Tag` template parameter "selects" the implementation:

Instantiating `Tag = binary_heap_tag` creates a binary heap of the form represented in the graphic with labels A1 or A2. The former is internally selected by priority_queue if `Value_Type` is instantiated by a primitive type (e.g., an int); the latter is internally selected for all other types (e.g., `std::string`). This implementation is relatively unstructured, and so has good `push` and `pop` performance; it is the "best-in-kind" for primitive types, e.g., ints. Conversely, it has high worst-case performance, and can support only linear-time `modify` and `erase` operations.

Instantiating `Tag = pairing_heap_tag` creates a pairing heap of the form represented by label B in the graphic above. This implementation too is relatively unstructured, and so has good `push` and `pop` performance; it is the "best-in-kind" for non-primitive types, e.g., `std::string`s. It also has very good worst-case `push` and `join` performance (O(1)), but has high worst-case `pop` complexity.

Instantiating `Tag = binomial_heap_tag` creates a binomial heap of the form represented by label B in the graphic above. This implementation is more structured than a pairing heap, and so has worse `push` and `pop` performance. Conversely, it has sub-linear worst-case bounds for `pop`, and so it might be preferred in cases where responsiveness is important.

Instantiating `Tag = rc_binomial_heap_tag` creates a binomial heap of the form represented in label B above, accompanied by a redundant counter which governs the trees. This implementation is therefore more structured than a binomial heap, and so has worse `push` and `pop` performance. Conversely, it guarantees O(1) `push` complexity, and so it might be preferred in cases where the responsiveness of a binomial heap is insufficient.

Instantiating `Tag = thin_heap_tag` creates a thin heap of the form represented by label B in the graphic above. This implementation too is more structured than a pairing heap, and so has worse `push` and `pop` performance. Conversely, it has amortized complexity identical to, and worst-case complexity better than, a Fibonacci heap, and so might be more appropriate for some graph algorithms.

Of course, one can use any order-preserving associative container as a priority queue, as in the graphic above, label C, possibly by creating an adapter class over the associative container (much as `std::priority_queue` can adapt `std::vector`). This has the advantage that no cross-referencing is necessary at all; the priority queue itself is an associative container. However, most associative containers are too structured to compete with priority queues in terms of `push` and `pop` performance.