
Re: [Bug libstdc++/54075] [4.7.1] unordered_map insert still slower than 4.6.2


Hi

Here is the patch I have been working on lately to add empty nodes at the end of buckets.

Advantages:
- It greatly simplifies the local_iterator implementation: there is no longer any need to store the hash functor, the bucket index, or the bucket count in it.
- Buckets always point to an empty node; thanks to this, when removing a node from a bucket there is no need to check whether it is the last one. We only need to update the next bucket when the current bucket is emptied (a sketch follows this list).
- Many usages of _M_bucket_index have been removed, meaning fewer occasions to rely on the hash functor.
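
To make the second point concrete, here is a minimal editorial sketch of the layout, with illustrative names rather than the real _Hash_node internals:

  struct NodeBase { NodeBase* _M_nxt; };

  template<typename T>
    struct Node : NodeBase
    {
      T* value;  // nullptr marks an empty end-of-bucket sentinel

      bool is_sentinel() const { return value == nullptr; }
    };

  // With a sentinel terminating each bucket, deciding whether a node is
  // the last one of its bucket no longer requires hashing: it is enough
  // to inspect the next node's value pointer.
  template<typename T>
    bool last_in_bucket(const Node<T>* n)
    {
      return !n->_M_nxt
             || static_cast<const Node<T>*>(n->_M_nxt)->is_sentinel();
    }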


Drawbacks:
- _M_rehash_aux might need to allocate new empty nodes. As a result, this method no longer offers the strong exception safety guarantee it used to.
- To create a node I need to invoke the allocator twice, once for the node and another time for the stored value. This is quite unusual, so I don't know whether it is a real problem.


Now for the performance numbers:

54075.cc std::tr1::unordered_set without hash code cached 300000 Foo insertions 9r 8u 2s 13765616mem 0pf
54075.cc std::tr1::unordered_set without hash code cached 10 times insertion of 300000 Foo 19r 19u 0s 0mem 0pf
54075.cc std::tr1::unordered_set with hash code cached 300000 Foo insertions 14r 14u 0s 18561984mem 0pf
54075.cc std::tr1::unordered_set with hash code cached 10 times insertion of 300000 Foo 21r 21u 0s 0mem 0pf
54075.cc std::tr1::unordered_multiset without hash code cached 300000 Foo insertions 8r 8u 0s 13761968mem 0pf
54075.cc std::tr1::unordered_multiset without hash code cached 10 times insertion of 300000 Foo 109r 101u 8s 126686848mem 0pf
54075.cc std::tr1::unordered_multiset with hash code cached 300000 Foo insertions 95r 94u 0s 18561984mem 0pf
54075.cc std::tr1::unordered_multiset with hash code cached 10 times insertion of 300000 Foo 108r 103u 4s 174686864mem 0pf
54075.cc std::unordered_set without hash code cached 300000 Foo insertions 16r 14u 2s 30644144mem 0pf
54075.cc std::unordered_set without hash code cached 10 times insertion of 300000 Foo 37r 37u 0s 0mem 0pf
54075.cc std::unordered_set with hash code cached 300000 Foo insertions 14r 14u 0s 30653968mem 0pf
54075.cc std::unordered_set with hash code cached 10 times insertion of 300000 Foo 34r 34u 0s 0mem 0pf
54075.cc std::unordered_multiset without hash code cached 300000 Foo insertions 13r 14u 0s 30656464mem 0pf
54075.cc std::unordered_multiset without hash code cached 10 times insertion of 300000 Foo 37r 36u 0s 0mem 0pf
54075.cc std::unordered_multiset with hash code cached 300000 Foo insertions 14r 13u 0s 30657488mem 0pf
54075.cc std::unordered_multiset with hash code cached 10 times insertion of 300000 Foo 34r 34u 0s 0mem 0pf


As you can see, performance is awful :-) The increase in memory consumption is really too large. What I find especially surprising is this line:
54075.cc std::unordered_set without hash code cached 10 times insertion of 300000 Foo 37r 37u 0s 0mem 0pf


It is no better than without the empty nodes.

But I think I won't waste any more time on this approach and will instead prepare a small patch to change the default behavior regarding the hash code. I agree with all your remarks, Jonathan, so I will use them for the patch.

François


On 11/19/2012 07:27 PM, Jonathan Wakely wrote:
On 18 November 2012 20:26, François Dumont wrote:
On 11/16/2012 02:51 AM, Jonathan Wakely wrote:
On 15 November 2012 21:00, François Dumont wrote:
On 11/14/2012 11:48 PM, Paolo Carlini wrote:
Can somebody remind me why *exactly* we have a condition having to do with the empty-ness of the functor?

Because when the hash code is not cached we need to embed the hash functor into the local iterator implementation. So to minimize the local iterator's memory footprint we preferred to cache the hash code as soon as the functor is not empty.
What if sizeof(functor) == 4? It will still be smaller than the size_t
hash code, right? (At least for common 64-bit platforms.) And if
sizeof(functor) == sizeof(size_t) it doesn't increase the footprint to
store the functor.
(I see now my question was based on the mistaken understanding that
the choice was simply between storing a hash code or a functor, but
they're not stored in the same place so it's not a straightforward
comparison.  Thanks for clearing up my misunderstanding, I think I'm
starting to grok the whole design at last!)

I meant I wanted to minimize sizeof(local_iterator). Iterators are most
of the time passed by value, so it is better to make them small, no?
If making them small had no other consequence, yes.  But if minimising
sizeof(local_iterator) means increasing sizeof(_Hash_node) it might be
a poor tradeoff.  Every user of unordered containers uses _Hash_node,
not everyone uses local iterators.  A single container might have
thousands of hash nodes; it's not likely a user will have thousands of
local iterators in memory at once (possible in some cases, but maybe
uncommon).

So minimizing local iterator size at the expense of hash nodes might
be the wrong choice.
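
To put rough numbers on that tradeoff, here is a hypothetical LP64 layout (an editorial illustration, not the actual libstdc++ types):

  struct NodePlain  { void* nxt; int value; };                   // ~16 bytes
  struct NodeCached { void* nxt; int value; std::size_t code; }; // ~24 bytes

  // A container holding 300000 elements, as in the 54075.cc benchmark,
  // pays the extra word 300000 times (~2.4 MB), while a program rarely
  // keeps more than a handful of local iterators alive at once.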

    And if sizeof(functor) > sizeof(size_t) then could
we store a single copy of the functor somewhere and store a pointer to
it in each node? That would have the same size as the hash code, so we
never increase the size of a node by more than sizeof(functor*), and
the memory footprint is bounded. Is that possible?

Do we have any performance numbers showing the impact of different
sizes of functor stored in the nodes?
No, because the functor is not stored in hashtable nodes but in the
local_iterator when hash codes are not cached.
Yes, sorry.  So do we have any performance numbers showing the impact
of different sizes of functor stored in the local iterators?

If it is inefficient to store large functors in the local iterators,
is it worth considering storing the functor in the hashtable and
storing a pointer to it in the iterators?  That would mean the local
iterator size is bounded.  But this might be a distraction from the
main issue, the size of the _Hash_node might be far more important.

But anyway, surely non-empty functors are not the common case, so are
unlikely to be a performance problem for most users.
Agree

Do we have any idea why caching the hash code seems to be slower than
recalculating it?
Martin Cracauer in comment #31 is saying: "As you can see there also is
a reasonably catastrophic increase in minor page faults". I think that
is the source of the performance issue.
Yes, I confirmed that yesterday, by changing __uset_traits to never
cache, then adding an unused size_t member to the _Hash_node type in
the non-caching case. Simply increasing the size of the node object
reproduces the exact same slowdown.
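
A self-contained sketch of that experiment (illustrative types, assuming nothing about the real node layout): the only difference between the two node types below is an unused size_t mimicking the cached hash code, which is enough to grow every allocation and increase memory traffic.

  #include <chrono>
  #include <cstddef>
  #include <cstdio>
  #include <vector>

  struct NodeSmall  { NodeSmall*  next; int value; };
  struct NodePadded { NodePadded* next; int value; std::size_t unused_code; };

  template<typename Node>
    double
    timed_build(std::size_t n)
    {
      auto t0 = std::chrono::steady_clock::now();
      std::vector<Node*> nodes;
      nodes.reserve(n);
      for (std::size_t i = 0; i < n; ++i)
        {
          Node* p = new Node();
          p->next = nullptr;
          p->value = static_cast<int>(i);
          nodes.push_back(p);
        }
      for (Node* p : nodes)
        delete p;
      auto t1 = std::chrono::steady_clock::now();
      return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

  int main()
  {
    std::printf("small:  %.1f ms\n", timed_build<NodeSmall>(300000));
    std::printf("padded: %.1f ms\n", timed_build<NodePadded>(300000));
  }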

So if adding noexcept might improve generated code but also makes the
behavior of unordered containers worse, users might complain about not
being able to get the best performance in both cases.
Agreed.  So we don't want 'noexcept' to be the only criterion for
caching or not.  But the current criteria do seem wrong.

Let's look at those criteria:

We cache if the hash function can throw; that's necessary, or we can't
make erase() noexcept.

We cache if the key is non-integral. This is the wrong choice for
Martin Cracauer's example: the key is not an integer, but the hash
function works on integers and is fast. It's likely there are lots of
user-defined types used as keys where the hash function works on an
integer member of a class type.  Don't we actually care about whether
the hash function is fast?  i.e. std::hash<int> is fast,
std::hash<std::string> is not.
Is that right?
We could easily have a user-specializable trait such that
hashing_is_fast<std::hash<int>> is true and
hashing_is_fast<std::hash<std::string>> is false, and users can
specialize it for their own hash functions.
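
A minimal sketch of such a trait (the name hashing_is_fast and the conservative default here are assumptions, not an existing libstdc++ interface):

  #include <cstddef>
  #include <functional>
  #include <string>
  #include <type_traits>

  template<typename _Hash>
    struct hashing_is_fast : std::false_type { }; // conservative default

  template<>
    struct hashing_is_fast<std::hash<int>> : std::true_type { };

  // A user whose hash function just reads an integer member, as in
  // Martin Cracauer's case, could opt in the same way:
  struct Key { int id; };
  struct KeyHash
  { std::size_t operator()(const Key& __k) const { return __k.id; } };

  template<>
    struct hashing_is_fast<KeyHash> : std::true_type { };

  static_assert(hashing_is_fast<std::hash<int>>::value, "");
  static_assert(!hashing_is_fast<std::hash<std::string>>::value, "");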

Currently we must cache if the functor is final, because
_Local_iterator_base cannot derive from a final class.  That
requirement is unnecessary: we could use composition instead of
inheritance if the class is final (like the ebo helper classes do.)
Whether the hash code is cached should be independent of whether the
hash function is final.
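
A sketch of how that could look, in the spirit of the EBO helpers (hypothetical names; std::is_final is used for brevity, though at the time the check would be GCC's __is_final intrinsic):

  #include <type_traits>

  template<typename _Hash,
           bool __use_ebo = std::is_empty<_Hash>::value
                            && !std::is_final<_Hash>::value>
    struct _Hash_holder : private _Hash  // empty, non-final: inherit for EBO
    {
      const _Hash& _M_get() const { return *this; }
    };

  template<typename _Hash>
    struct _Hash_holder<_Hash, false>    // final or stateful: compose
    {
      _Hash _M_h;
      const _Hash& _M_get() const { return _M_h; }
    };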

We cache if the functor is not empty.  This reduces the
_Local_iterator size but increases the _Hash_node size, which is the
wrong choice for Martin Cracauer's example.


Index: include/bits/hashtable.h
===================================================================
--- include/bits/hashtable.h	(revision 193766)
+++ include/bits/hashtable.h	(working copy)
@@ -213,6 +213,7 @@
       using __hash_code =  typename __hashtable_base::__hash_code;
       using __node_type = typename __hashtable_base::__node_type;
       using __node_base = typename __hashtable_base::__node_base;
+      using __node_empty = typename __hashtable_base::__node_empty;
       using __bucket_type = typename __hashtable_base::__bucket_type;
       using __ireturn_type = typename __hashtable_base::__ireturn_type;
       using __iconv_type = typename __hashtable_base::__iconv_type;
@@ -297,10 +298,14 @@
 				   const_local_iterator;
 
     private:
+      typedef typename _Alloc::template rebind<_Value>::other
+						_Value_allocator_type;
       typedef typename _Alloc::template rebind<__node_type>::other
-							_Node_allocator_type;
+						_Node_allocator_type;
+      typedef typename _Alloc::template rebind<__node_empty>::other
+						_Empty_node_allocator_type;
       typedef typename _Alloc::template rebind<__bucket_type>::other
-							_Bucket_allocator_type;
+						_Bucket_allocator_type;
 
       using __before_begin = __detail::_Before_begin<_Node_allocator_type>;
 
@@ -326,11 +331,17 @@
       _M_before_begin() const
       { return _M_bbegin._M_node; }
 
+      __node_empty*
+      _M_allocate_empty_node();
+
       template<typename... _Args>
 	__node_type*
 	_M_allocate_node(_Args&&... __args);
 
       void
+      _M_deallocate_empty_node(__node_empty* __n);
+
+      void
       _M_deallocate_node(__node_type* __n);
 
       // Deallocate the linked list of nodes pointed to by __n
@@ -352,6 +363,10 @@
       _M_begin() const
       { return static_cast<__node_type*>(_M_before_begin()._M_nxt); }
 
+      static __node_type*
+      _S_cast(__node_base* __n)
+      { return static_cast<__node_type*>(__n); }
+
     public:
       // Constructor, destructor, assignment, swap
       _Hashtable(size_type __bucket_hint,
@@ -500,30 +515,28 @@
 
       local_iterator
       begin(size_type __n)
-      { return local_iterator(_M_bucket_begin(__n), __n, _M_bucket_count); }
+      { return local_iterator(_M_bucket_begin(__n)); }
 
       local_iterator
       end(size_type __n)
-      { return local_iterator(nullptr, __n, _M_bucket_count); }
+      { return local_iterator(nullptr); }
 
       const_local_iterator
       begin(size_type __n) const
-      { return const_local_iterator(_M_bucket_begin(__n), __n,
-				    _M_bucket_count); }
+      { return const_local_iterator(_M_bucket_begin(__n)); }
 
       const_local_iterator
       end(size_type __n) const
-      { return const_local_iterator(nullptr, __n, _M_bucket_count); }
+      { return const_local_iterator(nullptr); }
 
       // DR 691.
       const_local_iterator
       cbegin(size_type __n) const
-      { return const_local_iterator(_M_bucket_begin(__n), __n,
-				    _M_bucket_count); }
+      { return const_local_iterator(_M_bucket_begin(__n)); }
 
       const_local_iterator
       cend(size_type __n) const
-      { return const_local_iterator(nullptr, __n, _M_bucket_count); }
+      { return const_local_iterator(nullptr); }
 
       float
       load_factor() const noexcept
@@ -580,7 +593,7 @@
       {
 	__node_base* __before_n = _M_find_before_node(__bkt, __key, __c);
 	if (__before_n)
-	  return static_cast<__node_type*>(__before_n->_M_nxt);
+	  return _S_cast(__before_n->_M_nxt);
 	return nullptr;
       }
 
@@ -589,9 +602,8 @@
       _M_insert_bucket_begin(size_type, __node_type*);
 
       // Remove the bucket first node
-      void
-      _M_remove_bucket_begin(size_type __bkt, __node_type* __next_n,
-			     size_type __next_bkt);
+      __node_type*
+      _M_remove_bucket_begin(size_type __bkt, __node_type* __next_n);
 
       // Get the node before __n in the bucket __bkt
       __node_base*
@@ -691,6 +703,30 @@
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
 	   typename _Traits>
+    typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+			_H1, _H2, _Hash, _RehashPolicy, _Traits>::__node_empty*
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+    _M_allocate_empty_node()
+    {
+      _Empty_node_allocator_type __alloc(_M_node_allocator());
+      __node_empty* __n = __alloc.allocate(1);
+      __try
+	{
+	  __alloc.construct(__n, nullptr);
+	  return __n;
+	}
+      __catch(...)
+	{
+	  __alloc.deallocate(__n, 1);
+	  __throw_exception_again;
+	}
+    }
+
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   typename _Traits>
     template<typename... _Args>
       typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 			  _H1, _H2, _Hash, _RehashPolicy, _Traits>::__node_type*
@@ -698,15 +734,29 @@
 		 _H1, _H2, _Hash, _RehashPolicy, _Traits>::
       _M_allocate_node(_Args&&... __args)
       {
-	__node_type* __n = _M_node_allocator().allocate(1);
+	_Value_allocator_type __alloc(_M_node_allocator());
+	auto __val = __alloc.allocate(1);
+	__node_type* __n = nullptr;
 	__try
 	  {
-	    _M_node_allocator().construct(__n, std::forward<_Args>(__args)...);
-	    return __n;
+	    __alloc.construct(__val, std::forward<_Args>(__args)...);
+	    __try
+	      {
+		__n = _M_node_allocator().allocate(1);
+		_M_node_allocator().construct(__n, __val);
+		return __n;
+	      }
+	    __catch(...)
+	      {
+		if (__n)
+		  _M_node_allocator().deallocate(__n, 1);
+		__alloc.destroy(__val);
+		__throw_exception_again;
+	      }
 	  }
 	__catch(...)
 	  {
-	    _M_node_allocator().deallocate(__n, 1);
+	    __alloc.deallocate(__val, 1);
 	    __throw_exception_again;
 	  }
       }
@@ -718,8 +768,25 @@
     void
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+    _M_deallocate_empty_node(__node_empty* __n)
+    {
+      _Empty_node_allocator_type __alloc(_M_node_allocator());
+      __alloc.destroy(__n);
+      __alloc.deallocate(__n, 1);
+    }
+
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   typename _Traits>
+    void
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
     _M_deallocate_node(__node_type* __n)
     {
+      _Value_allocator_type __alloc(_M_node_allocator());
+      __alloc.destroy(__n->_M_pv);
+      __alloc.deallocate(__n->_M_pv, 1);
       _M_node_allocator().destroy(__n);
       _M_node_allocator().deallocate(__n, 1);
     }
@@ -737,7 +804,10 @@
 	{
 	  __node_type* __tmp = __n;
 	  __n = __n->_M_next();
-	  _M_deallocate_node(__tmp);
+	  if (__tmp->_M_pv)
+	    _M_deallocate_node(__tmp);
+	  else
+	    _M_deallocate_empty_node(__tmp);
 	}
     }
 
@@ -783,7 +853,7 @@
     _M_bucket_begin(size_type __bkt) const
     {
       __node_base* __n = _M_buckets[__bkt];
-      return __n ? static_cast<__node_type*>(__n->_M_nxt) : nullptr;
+      return __n ? _S_cast(__n->_M_nxt) : nullptr;
     }
 
   template<typename _Key, typename _Value,
@@ -872,21 +942,30 @@
 	  // First deal with the special first node pointed to by
 	  // _M_before_begin.
 	  const __node_type* __ht_n = __ht._M_begin();
-	  __node_type* __this_n = _M_allocate_node(__ht_n->_M_v);
+	  __node_type* __this_n = _M_allocate_node(__ht_n->_M_v());
 	  this->_M_copy_code(__this_n, __ht_n);
 	  _M_before_begin()._M_nxt = __this_n;
-	  _M_buckets[_M_bucket_index(__this_n)] = &_M_before_begin();
+	  size_type __prev_bkt = _M_bucket_index(__this_n);
+	  _M_buckets[__prev_bkt] = &_M_before_begin();
 
 	  // Then deal with other nodes.
 	  __node_base* __prev_n = __this_n;
 	  for (__ht_n = __ht_n->_M_next(); __ht_n; __ht_n = __ht_n->_M_next())
 	    {
-	      __this_n = _M_allocate_node(__ht_n->_M_v);
-	      __prev_n->_M_nxt = __this_n;
+	      if (!__ht_n->_M_pv)
+		continue;
+	      __this_n = _M_allocate_node(__ht_n->_M_v());
 	      this->_M_copy_code(__this_n, __ht_n);
 	      size_type __bkt = _M_bucket_index(__this_n);
 	      if (!_M_buckets[__bkt])
-		_M_buckets[__bkt] = __prev_n;
+		{
+		  // We move to a new bucket, add an empty node to mark end of
+		  // previous bucket.
+		  __prev_n->_M_nxt = _M_allocate_empty_node();
+		  __prev_n = __prev_n->_M_nxt;
+		  _M_buckets[__bkt] = __prev_n;
+		}
+	      __prev_n->_M_nxt = __this_n;
 	      __prev_n = __this_n;
 	    }
 	}
@@ -999,8 +1078,7 @@
     {
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __n = _M_bucket_index(__k, __code);
-      __node_type* __p = _M_find_node(__n, __k, __code);
-      return __p ? iterator(__p) : this->end();
+      return iterator(_M_find_node(__n, __k, __code));
     }
 
   template<typename _Key, typename _Value,
@@ -1016,8 +1094,7 @@
     {
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __n = _M_bucket_index(__k, __code);
-      __node_type* __p = _M_find_node(__n, __k, __code);
-      return __p ? const_iterator(__p) : this->end();
+      return const_iterator(_M_find_node(__n, __k, __code));
     }
 
   template<typename _Key, typename _Value,
@@ -1047,7 +1124,7 @@
 	    // found a non-equivalent value after an equivalent one it
 	    // means that we won't find any more equivalent values.
 	    break;
-	  if (!__p->_M_nxt || _M_bucket_index(__p->_M_next()) != __n)
+	  if (!__p->_M_nxt || !__p->_M_next()->_M_pv)
 	    break;
 	}
       return __result;
@@ -1076,7 +1153,7 @@
       if (__p)
 	{
 	  __node_type* __p1 = __p->_M_next();
-	  while (__p1 && _M_bucket_index(__p1) == __n
+	  while (__p1 && __p1->_M_pv
 		 && this->_M_equals(__k, __code, __p1))
 	    __p1 = __p1->_M_next();
 
@@ -1109,7 +1186,7 @@
       if (__p)
 	{
 	  __node_type* __p1 = __p->_M_next();
-	  while (__p1 && _M_bucket_index(__p1) == __n
+	  while (__p1 && __p1->_M_pv
 		 && this->_M_equals(__k, __code, __p1))
 	    __p1 = __p1->_M_next();
 
@@ -1136,12 +1213,12 @@
       __node_base* __prev_p = _M_buckets[__n];
       if (!__prev_p)
 	return nullptr;
-      __node_type* __p = static_cast<__node_type*>(__prev_p->_M_nxt);
+      __node_type* __p = _S_cast(__prev_p->_M_nxt);
       for (;; __p = __p->_M_next())
 	{
 	  if (this->_M_equals(__k, __code, __p))
 	    return __prev_p;
-	  if (!(__p->_M_nxt) || _M_bucket_index(__p->_M_next()) != __n)
+	  if (!(__p->_M_nxt) || !__p->_M_next()->_M_pv)
 	    break;
 	  __prev_p = __p;
 	}
@@ -1169,12 +1246,19 @@
 	  // The bucket is empty, the new node is inserted at the
 	  // beginning of the singly-linked list and the bucket will
 	  // contain _M_before_begin pointer.
-	  __node->_M_nxt = _M_before_begin()._M_nxt;
+	  if (_M_before_begin()._M_nxt)
+	    {
+	      // Add an empty node to mark end of bucket
+	      __node_empty* __empty_node = _M_allocate_empty_node();
+	      __empty_node->_M_nxt = _M_before_begin()._M_nxt;
+	      __node->_M_nxt = __empty_node;
+
+	      // We must update former begin bucket that is pointing to
+	      // _M_before_begin.
+	      _M_buckets[_M_bucket_index(_S_cast(_M_before_begin()._M_nxt))]
+		= __empty_node;
+	    }
 	  _M_before_begin()._M_nxt = __node;
-	  if (__node->_M_nxt)
-	    // We must update former begin bucket that is pointing to
-	    // _M_before_begin.
-	    _M_buckets[_M_bucket_index(__node->_M_next())] = __node;
 	  _M_buckets[__bkt] = &_M_before_begin();
 	}
     }
@@ -1183,24 +1267,40 @@
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
 	   typename _Traits>
-    void
+    typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey,
+			_Equal, _H1, _H2, _Hash, _RehashPolicy,
+			_Traits>::__node_type*
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_remove_bucket_begin(size_type __bkt, __node_type* __next,
-			   size_type __next_bkt)
+    _M_remove_bucket_begin(size_type __bkt, __node_type* __next)
     {
-      if (!__next || __next_bkt != __bkt)
+      if (!__next || !__next->_M_pv)
 	{
 	  // Bucket is now empty
-	  // First update next bucket if any
 	  if (__next)
-	    _M_buckets[__next_bkt] = _M_buckets[__bkt];
+	    {
+	      if (__next->_M_nxt)
+		{
+		  // First, update next bucket.
+		  _M_buckets[_M_bucket_index(__next->_M_next())]
+		    = _M_buckets[__bkt];
+		}
 
-	  // Second update before begin node if necessary
-	  if (&_M_before_begin() == _M_buckets[__bkt])
-	    _M_before_begin()._M_nxt = __next;
+	      // Second, update before begin node.
+	      if (&_M_before_begin() == _M_buckets[__bkt])
+		_M_before_begin()._M_nxt = __next->_M_nxt;
+
+	      // Last, get rid of now useless empty node.
+	      __node_type* __empty_node = __next;
+	      __next = __next->_M_next();
+	      _M_deallocate_empty_node(__empty_node);
+	    }
+	  else
+	    if (&_M_before_begin() == _M_buckets[__bkt])
+	      _M_before_begin()._M_nxt = nullptr;
 	  _M_buckets[__bkt] = nullptr;
 	}
+      return __next;
     }
 
   template<typename _Key, typename _Value,
@@ -1235,7 +1335,7 @@
       {
 	// First build the node to get access to the hash code
 	__node_type* __node = _M_allocate_node(std::forward<_Args>(__args)...);
-	const key_type& __k = this->_M_extract()(__node->_M_v);
+	const key_type& __k = this->_M_extract()(__node->_M_v());
 	__hash_code __code;
 	__try
 	  {
@@ -1278,7 +1378,7 @@
 	__hash_code __code;
 	__try
 	  {
-	    __code = this->_M_hash_code(this->_M_extract()(__node->_M_v));
+	    __code = this->_M_hash_code(this->_M_extract()(__node->_M_v()));
 	  }
 	__catch(...)
 	  {
@@ -1310,7 +1410,7 @@
 	  if (__do_rehash.first)
 	    {
 	      _M_rehash(__do_rehash.second, __saved_state);
-	      __bkt = _M_bucket_index(this->_M_extract()(__node->_M_v), __code);
+	      __bkt = _M_bucket_index(this->_M_extract()(__node->_M_v()), __code);
 	    }
 
 	  this->_M_store_code(__node, __code);
@@ -1350,7 +1450,7 @@
 	    _M_rehash(__do_rehash.second, __saved_state);
 
 	  this->_M_store_code(__node, __code);
-	  const key_type& __k = this->_M_extract()(__node->_M_v);
+	  const key_type& __k = this->_M_extract()(__node->_M_v());
 	  size_type __bkt = _M_bucket_index(__k, __code);
 
 	  // Find the node before an equivalent one.
@@ -1459,17 +1559,10 @@
     _M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n)
     {
       if (__prev_n == _M_buckets[__bkt])
-	_M_remove_bucket_begin(__bkt, __n->_M_next(),
-	   __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
-      else if (__n->_M_nxt)
-	{
-	  size_type __next_bkt = _M_bucket_index(__n->_M_next());
-	  if (__next_bkt != __bkt)
-	    _M_buckets[__next_bkt] = __prev_n;
-	}
-
-      __prev_n->_M_nxt = __n->_M_nxt;
-      iterator __result(__n->_M_next());
+	__prev_n->_M_nxt = _M_remove_bucket_begin(__bkt, __n->_M_next());
+      else
+	__prev_n->_M_nxt = __n->_M_nxt;
+      iterator __result(_S_cast(__prev_n->_M_nxt));
       _M_deallocate_node(__n);
       --_M_element_count;
 
@@ -1496,7 +1589,7 @@
 	return 0;
 
       // We found a matching node, erase it.
-      __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
+      __node_type* __n = _S_cast(__prev_n->_M_nxt);
       _M_erase(__bkt, __prev_n, __n);
       return 1;
     }
@@ -1526,17 +1619,16 @@
       // We use one loop to find all matching nodes and another to deallocate
       // them so that the key stays valid during the first loop. It might be
       // invalidated indirectly when destroying nodes.
-      __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
+      __node_type* __n = _S_cast(__prev_n->_M_nxt);
       __node_type* __n_last = __n;
       std::size_t __n_last_bkt = __bkt;
       do
 	{
 	  __n_last = __n_last->_M_next();
-	  if (!__n_last)
+	  if (!__n_last || !__n_last->_M_pv)
 	    break;
-	  __n_last_bkt = _M_bucket_index(__n_last);
 	}
-      while (__n_last_bkt == __bkt && this->_M_equals(__k, __code, __n_last));
+      while (this->_M_equals(__k, __code, __n_last));
 
       // Deallocate nodes.
       size_type __result = 0;
@@ -1551,9 +1643,7 @@
       while (__n != __n_last);
 
       if (__prev_n == _M_buckets[__bkt])
-	_M_remove_bucket_begin(__bkt, __n_last, __n_last_bkt);
-      else if (__n_last && __n_last_bkt != __bkt)
-	_M_buckets[__n_last_bkt] = __prev_n;
+	__n_last = _M_remove_bucket_begin(__bkt, __n_last);
       __prev_n->_M_nxt = __n_last;
       return __result;
     }
@@ -1577,32 +1667,40 @@
       std::size_t __bkt = _M_bucket_index(__n);
 
       __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
-      bool __is_bucket_begin = __n == _M_bucket_begin(__bkt);
-      std::size_t __n_bkt = __bkt;
+      bool __is_bucket_begin = __prev_n == _M_buckets[__bkt];
       for (;;)
 	{
+	  // First loop to remove current bucket nodes.
 	  do
 	    {
 	      __node_type* __tmp = __n;
 	      __n = __n->_M_next();
 	      _M_deallocate_node(__tmp);
 	      --_M_element_count;
-	      if (!__n)
+	      if (!__n || !__n->_M_pv)
 		break;
-	      __n_bkt = _M_bucket_index(__n);
 	    }
-	  while (__n != __last_n && __n_bkt == __bkt);
+	  while (__n != __last_n);
+
+	  // We finish to treat the bucket or we reach last.
 	  if (__is_bucket_begin)
-	    _M_remove_bucket_begin(__bkt, __n, __n_bkt);
+	    {
+	      // Check if the bucket is not empty
+	      __n = _M_remove_bucket_begin(__bkt, __n);
+	      __prev_n->_M_nxt = __n;
+	    }
+	  else if (__n && !__n->_M_pv)
+	    {
+	      // Next needs to be the empty node we reached.
+	      __prev_n->_M_nxt = __n;
+	      __n = __n->_M_next();
+	    }
 	  if (__n == __last_n)
 	    break;
 	  __is_bucket_begin = true;
-	  __bkt = __n_bkt;
+	  __bkt = _M_bucket_index(__n);
 	}
 
-      if (__n && (__n_bkt != __bkt || __is_bucket_begin))
-	_M_buckets[__n_bkt] = __prev_n;
-      __prev_n->_M_nxt = __n;
       return iterator(__n);
     }
 
@@ -1675,22 +1773,50 @@
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
     _M_rehash_aux(size_type __n, std::true_type)
     {
+      std::size_t __bbegin_bkt = 0;
+      __node_empty* __empty_nodes = nullptr;
       __bucket_type* __new_buckets = _M_allocate_buckets(__n);
       __node_type* __p = _M_begin();
       _M_before_begin()._M_nxt = nullptr;
-      std::size_t __bbegin_bkt = 0;
+
       while (__p)
 	{
 	  __node_type* __next = __p->_M_next();
+	  // Deal with empty node now to potentially store a first empty node to
+	  // use later.
+	  if (__next && !__next->_M_pv)
+	    {
+	      __node_type* __tmp = __next->_M_next();
+	      __next->_M_nxt = __empty_nodes;
+	      __empty_nodes = __next;
+	      __next = __tmp;
+	    }
+
 	  std::size_t __bkt = __hash_code_base::_M_bucket_index(__p, __n);
 	  if (!__new_buckets[__bkt])
 	    {
-	      __p->_M_nxt = _M_before_begin()._M_nxt;
+	      if (_M_before_begin()._M_nxt)
+		{
+		  __node_empty* __empty_node;
+		  if (__empty_nodes)
+		    {
+		      // Use an existing empty node.
+		      __empty_node = __empty_nodes;
+		      __empty_nodes = __empty_nodes->_M_next();
+		    }
+		  else
+		    // We need to allocate an empty node.
+		    __empty_node = _M_allocate_empty_node();
+
+		  __empty_node->_M_nxt = _M_before_begin()._M_nxt;
+		  __p->_M_nxt = __empty_node;
+		  __new_buckets[__bbegin_bkt] = __empty_node;
+		}
+	      else
+		__p->_M_nxt = nullptr;
 	      _M_before_begin()._M_nxt = __p;
+	      __bbegin_bkt = __bkt;
 	      __new_buckets[__bkt] = &_M_before_begin();
-	      if (__p->_M_nxt)
-		__new_buckets[__bbegin_bkt] = __p;
-	      __bbegin_bkt = __bkt;
 	    }
 	  else
 	    {
@@ -1699,6 +1825,13 @@
 	    }
 	  __p = __next;
 	}
+
+      while (__empty_nodes)
+	{
+	  __node_empty* __tmp = __empty_nodes;
+	  __empty_nodes = __empty_nodes->_M_next();
+	  _M_deallocate_empty_node(__tmp);
+	}
       _M_deallocate_buckets(_M_buckets, _M_bucket_count);
       _M_bucket_count = __n;
       _M_buckets = __new_buckets;
@@ -1717,16 +1850,26 @@
     {
       __bucket_type* __new_buckets = _M_allocate_buckets(__n);
 
+      std::size_t __bbegin_bkt = 0;
+      __node_empty* __empty_nodes = nullptr;
       __node_type* __p = _M_begin();
       _M_before_begin()._M_nxt = nullptr;
-      std::size_t __bbegin_bkt = 0;
       std::size_t __prev_bkt = 0;
       __node_type* __prev_p = nullptr;
-      bool __check_bucket = false;
 
       while (__p)
 	{
 	  __node_type* __next = __p->_M_next();
+	  // Deal with empty node now to potentially store a first empty node to
+	  // use later.
+	  if (__next && !__next->_M_pv)
+	    {
+	      __node_type* __tmp = __next->_M_next();
+	      __next->_M_nxt = __empty_nodes;
+	      __empty_nodes = __next;
+	      __next = __tmp;
+	    }
+
 	  std::size_t __bkt = __hash_code_base::_M_bucket_index(__p, __n);
 
 	  if (__prev_p && __prev_bkt == __bkt)
@@ -1736,39 +1879,33 @@
 	      // relative order.
 	      __p->_M_nxt = __prev_p->_M_nxt;
 	      __prev_p->_M_nxt = __p;
-
-	      // Inserting after a node in a bucket require to check that we
-	      // haven't change the bucket last node, in this case next
-	      // bucket containing its before begin node must be updated. We
-	      // schedule a check as soon as we move out of the sequence of
-	      // equivalent nodes to limit the number of checks.
-	      __check_bucket = true;
 	    }
 	  else
 	    {
-	      if (__check_bucket)
+	      if (!__new_buckets[__bkt])
 		{
-		  // Check if we shall update the next bucket because of
-		  // insertions into __prev_bkt bucket.
-		  if (__prev_p->_M_nxt)
+		  if (_M_before_begin()._M_nxt)
 		    {
-		      std::size_t __next_bkt
-			= __hash_code_base::_M_bucket_index(__prev_p->_M_next(),
-							    __n);
-		      if (__next_bkt != __prev_bkt)
-			__new_buckets[__next_bkt] = __prev_p;
+		      __node_empty* __empty_node;
+		      if (__empty_nodes)
+			{
+			  // Use an existing empty node.
+			  __empty_node = __empty_nodes;
+			  __empty_nodes = __empty_nodes->_M_next();
+			}
+		      else
+			// We need to allocate an empty node.
+			__empty_node = _M_allocate_empty_node();
+
+		      __empty_node->_M_nxt = _M_before_begin()._M_nxt;
+		      __p->_M_nxt = __empty_node;
+		      __new_buckets[__bbegin_bkt] = __empty_node;
 		    }
-		  __check_bucket = false;
-		}
-
-	      if (!__new_buckets[__bkt])
-		{
-		  __p->_M_nxt = _M_before_begin()._M_nxt;
+		  else
+		    __p->_M_nxt = nullptr;
 		  _M_before_begin()._M_nxt = __p;
+		  __bbegin_bkt = __bkt;
 		  __new_buckets[__bkt] = &_M_before_begin();
-		  if (__p->_M_nxt)
-		    __new_buckets[__bbegin_bkt] = __p;
-		  __bbegin_bkt = __bkt;
 		}
 	      else
 		{
@@ -1781,14 +1918,12 @@
 	  __p = __next;
 	}
 
-      if (__check_bucket && __prev_p->_M_nxt)
+      while (__empty_nodes)
 	{
-	  std::size_t __next_bkt
-	    = __hash_code_base::_M_bucket_index(__prev_p->_M_next(), __n);
-	  if (__next_bkt != __prev_bkt)
-	    __new_buckets[__next_bkt] = __prev_p;
+	  __node_empty* __tmp = __empty_nodes;
+	  __empty_nodes = __empty_nodes->_M_next();
+	  _M_deallocate_empty_node(__tmp);
 	}
-
       _M_deallocate_buckets(_M_buckets, _M_bucket_count);
       _M_bucket_count = __n;
       _M_buckets = __new_buckets;
Index: include/bits/hashtable_policy.h
===================================================================
--- include/bits/hashtable_policy.h	(revision 193766)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -162,40 +162,47 @@
     struct _Hash_node;
 
   /**
-   *  Specialization for nodes with caches, struct _Hash_node.
+   *  Specialization for nodes without caches, struct _Hash_node.
    *
    *  Base class is __detail::_Hash_node_base.
    */
   template<typename _Value>
-    struct _Hash_node<_Value, true> : _Hash_node_base
+    struct _Hash_node<_Value, false> : _Hash_node_base
     {
-      _Value       _M_v;
-      std::size_t  _M_hash_code;
+      _Value* _M_pv;
 
-      template<typename... _Args>
-	_Hash_node(_Args&&... __args)
-	: _M_v(std::forward<_Args>(__args)...), _M_hash_code() { }
+      _Hash_node(_Value* __v)
+	: _M_pv(__v) { }
 
       _Hash_node*
       _M_next() const { return static_cast<_Hash_node*>(_M_nxt); }
+
+      _Value&
+      _M_v() noexcept
+      { return *_M_pv; }
+
+      const _Value&
+      _M_v() const noexcept
+      { return *_M_pv; }
     };
 
   /**
-   *  Specialization for nodes without caches, struct _Hash_node.
+   *  Specialization for nodes with caches, struct _Hash_node.
    *
    *  Base class is __detail::_Hash_node_base.
    */
   template<typename _Value>
-    struct _Hash_node<_Value, false> : _Hash_node_base
+    struct _Hash_node<_Value, true> : _Hash_node<_Value, false>
     {
-      _Value       _M_v;
+      typedef _Hash_node<_Value, false> _Base;
 
-      template<typename... _Args>
-	_Hash_node(_Args&&... __args)
-	: _M_v(std::forward<_Args>(__args)...) { }
+      std::size_t _M_hash_code;
 
+      _Hash_node(_Value* __v)
+	: _Base(__v), _M_hash_code() { }
+
       _Hash_node*
-      _M_next() const { return static_cast<_Hash_node*>(_M_nxt); }
+      _M_next() const { return static_cast<_Hash_node*>(this->_M_nxt); }
     };
 
   /// Base class for node iterators.
@@ -207,11 +214,21 @@
       __node_type*  _M_cur;
 
       _Node_iterator_base(__node_type* __p)
-      : _M_cur(__p) { }
+      : _M_cur(__p) 
+      {
+	// Never stay on an empty node.
+	if (_M_cur && !_M_cur->_M_pv)
+	  _M_cur = _M_cur->_M_next();
+      }
 
       void
       _M_incr()
-      { _M_cur = _M_cur->_M_next(); }
+      {
+	_M_cur = _M_cur->_M_next();
+	if (_M_cur && !_M_cur->_M_pv)
+	  // Jump over empty nodes, there can't be consecutive ones.
+	  _M_cur = _M_cur->_M_next();
+      }
     };
 
   template<typename _Value, bool _Cache_hash_code>
@@ -255,11 +272,11 @@
 
       reference
       operator*() const
-      { return this->_M_cur->_M_v; }
+      { return this->_M_cur->_M_v(); }
 
       pointer
       operator->() const
-      { return std::__addressof(this->_M_cur->_M_v); }
+      { return std::__addressof(this->_M_cur->_M_v()); }
 
       _Node_iterator&
       operator++()
@@ -307,11 +324,11 @@
 
       reference
       operator*() const
-      { return this->_M_cur->_M_v; }
+      { return this->_M_cur->_M_v(); }
 
       pointer
       operator->() const
-      { return std::__addressof(this->_M_cur->_M_v); }
+      { return std::__addressof(this->_M_cur->_M_v()); }
 
       _Node_const_iterator&
       operator++()
@@ -566,7 +583,7 @@
 	  return __h->_M_insert_unique_node(__n, __code, __p)->second;
 	}
 
-      return (__p->_M_v).second;
+      return (__p->_M_v()).second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -592,7 +609,7 @@
 	  return __h->_M_insert_unique_node(__n, __code, __p)->second;
 	}
 
-      return (__p->_M_v).second;
+      return (__p->_M_v()).second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -612,7 +629,7 @@
 
       if (!__p)
 	__throw_out_of_range(__N("_Map_base::at"));
-      return (__p->_M_v).second;
+      return (__p->_M_v()).second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -632,7 +649,7 @@
 
       if (!__p)
 	__throw_out_of_range(__N("_Map_base::at"));
-      return (__p->_M_v).second;
+      return (__p->_M_v()).second;
     }
 
   /**
@@ -998,7 +1015,7 @@
 
       std::size_t
       _M_bucket_index(const __node_type* __p, std::size_t __n) const
-      { return _M_ranged_hash()(_M_extract()(__p->_M_v), __n); }
+      { return _M_ranged_hash()(_M_extract()(__p->_M_v()), __n); }
 
       void
       _M_store_code(__node_type*, __hash_code) const
@@ -1085,7 +1102,7 @@
       std::size_t
       _M_bucket_index(const __node_type* __p,
 		      std::size_t __n) const
-      { return _M_h2()(_M_h1()(_M_extract()(__p->_M_v)), __n); }
+      { return _M_h2()(_M_h1()(_M_extract()(__p->_M_v())), __n); }
 
       void
       _M_store_code(__node_type*, __hash_code) const
@@ -1219,7 +1236,7 @@
     static bool
     _S_equals(const _Equal& __eq, const _ExtractKey& __extract,
 	      const _Key& __k, _HashCodeType __c, _Hash_node<_Value, true>* __n)
-    { return __c == __n->_M_hash_code && __eq(__k, __extract(__n->_M_v)); }
+    { return __c == __n->_M_hash_code && __eq(__k, __extract(__n->_M_v())); }
   };
 
   /// Specialization.
@@ -1230,7 +1247,7 @@
     static bool
     _S_equals(const _Equal& __eq, const _ExtractKey& __extract,
 	      const _Key& __k, _HashCodeType, _Hash_node<_Value, false>* __n)
-    { return __eq(__k, __extract(__n->_M_v)); }
+    { return __eq(__k, __extract(__n->_M_v())); }
   };
 
 
@@ -1240,98 +1257,41 @@
    *  Base class for local iterators, used to iterate within a bucket
    *  but not between buckets.
    */
-  template<typename _Key, typename _Value, typename _ExtractKey,
-	   typename _H1, typename _H2, typename _Hash,
-	   bool __cache_hash_code>
-    struct _Local_iterator_base;
-
-  /// Specialization.
-  template<typename _Key, typename _Value, typename _ExtractKey,
-	   typename _H1, typename _H2, typename _Hash>
-    struct _Local_iterator_base<_Key, _Value, _ExtractKey,
-				_H1, _H2, _Hash, true>
-    : private _H2
+  template<typename _Value, bool __cache>
+    struct _Local_iterator_base
     {
       _Local_iterator_base() = default;
-      _Local_iterator_base(_Hash_node<_Value, true>* __p,
-			   std::size_t __bkt, std::size_t __bkt_count)
-      : _M_cur(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count) { }
+      _Local_iterator_base(_Hash_node<_Value, __cache>* __p)
+      : _M_cur(__p) { }
 
       void
       _M_incr()
       {
 	_M_cur = _M_cur->_M_next();
-	if (_M_cur)
-	  {
-	    std::size_t __bkt = _M_h2()(_M_cur->_M_hash_code, _M_bucket_count);
-	    if (__bkt != _M_bucket)
-	      _M_cur = nullptr;
-	  }
+	if (_M_cur && !_M_cur->_M_pv)
+	  _M_cur = nullptr;
       }
 
-      const _H2& _M_h2() const
-      { return *this; }
-
-      _Hash_node<_Value, true>*  _M_cur;
-      std::size_t _M_bucket;
-      std::size_t _M_bucket_count;
+      _Hash_node<_Value, __cache>*  _M_cur;
     };
 
-  /// Specialization.
-  template<typename _Key, typename _Value, typename _ExtractKey,
-	   typename _H1, typename _H2, typename _Hash>
-    struct _Local_iterator_base<_Key, _Value, _ExtractKey,
-				_H1, _H2, _Hash, false>
-    : private _Hash_code_base<_Key, _Value, _ExtractKey,
-			      _H1, _H2, _Hash, false>
-    {
-      _Local_iterator_base() = default;
-      _Local_iterator_base(_Hash_node<_Value, false>* __p,
-			   std::size_t __bkt, std::size_t __bkt_count)
-      : _M_cur(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count) { }
-
-      void
-      _M_incr()
-      {
-	_M_cur = _M_cur->_M_next();
-	if (_M_cur)
-	  {
-	    std::size_t __bkt = this->_M_bucket_index(_M_cur, _M_bucket_count);
-	    if (__bkt != _M_bucket)
-	      _M_cur = nullptr;
-	  }
-      }
-
-      _Hash_node<_Value, false>*  _M_cur;
-      std::size_t _M_bucket;
-      std::size_t _M_bucket_count;
-    };
-
-  template<typename _Key, typename _Value, typename _ExtractKey,
-	   typename _H1, typename _H2, typename _Hash, bool __cache>
+  template<typename _Value, bool __cache>
     inline bool
-    operator==(const _Local_iterator_base<_Key, _Value, _ExtractKey,
-					  _H1, _H2, _Hash, __cache>& __x,
-	       const _Local_iterator_base<_Key, _Value, _ExtractKey,
-					  _H1, _H2, _Hash, __cache>& __y)
+    operator==(const _Local_iterator_base<_Value, __cache>& __x,
+	       const _Local_iterator_base<_Value, __cache>& __y)
     { return __x._M_cur == __y._M_cur; }
 
-  template<typename _Key, typename _Value, typename _ExtractKey,
-	   typename _H1, typename _H2, typename _Hash, bool __cache>
+  template<typename _Value, bool __cache>
     inline bool
-    operator!=(const _Local_iterator_base<_Key, _Value, _ExtractKey,
-					  _H1, _H2, _Hash, __cache>& __x,
-	       const _Local_iterator_base<_Key, _Value, _ExtractKey,
-					  _H1, _H2, _Hash, __cache>& __y)
+    operator!=(const _Local_iterator_base<_Value, __cache>& __x,
+	       const _Local_iterator_base<_Value, __cache>& __y)
     { return __x._M_cur != __y._M_cur; }
 
   /// local iterators
-  template<typename _Key, typename _Value, typename _ExtractKey,
-	   typename _H1, typename _H2, typename _Hash,
+  template<typename _Value,
 	   bool __constant_iterators, bool __cache>
     struct _Local_iterator
-    : public _Local_iterator_base<_Key, _Value, _ExtractKey,
-				  _H1, _H2, _Hash, __cache>
+    : public _Local_iterator_base<_Value, __cache>
     {
       typedef _Value                                   value_type;
       typedef typename std::conditional<__constant_iterators,
@@ -1346,19 +1306,17 @@
       _Local_iterator() = default;
 
       explicit
-      _Local_iterator(_Hash_node<_Value, __cache>* __p,
-		      std::size_t __bkt, std::size_t __bkt_count)
-      : _Local_iterator_base<_Key, _Value, _ExtractKey, _H1, _H2, _Hash,
-			     __cache>(__p, __bkt, __bkt_count)
+      _Local_iterator(_Hash_node<_Value, __cache>* __p)
+      : _Local_iterator_base<_Value, __cache>(__p)
       { }
 
       reference
       operator*() const
-      { return this->_M_cur->_M_v; }
+      { return this->_M_cur->_M_v(); }
 
       pointer
       operator->() const
-      { return std::__addressof(this->_M_cur->_M_v); }
+      { return std::__addressof(this->_M_cur->_M_v()); }
 
       _Local_iterator&
       operator++()
@@ -1377,12 +1335,10 @@
     };
 
   /// local const_iterators
-  template<typename _Key, typename _Value, typename _ExtractKey,
-	   typename _H1, typename _H2, typename _Hash,
+  template<typename _Value,
 	   bool __constant_iterators, bool __cache>
     struct _Local_const_iterator
-    : public _Local_iterator_base<_Key, _Value, _ExtractKey,
-				  _H1, _H2, _Hash, __cache>
+    : public _Local_iterator_base<_Value, __cache>
     {
       typedef _Value                                   value_type;
       typedef const _Value*                            pointer;
@@ -1393,28 +1349,23 @@
       _Local_const_iterator() = default;
 
       explicit
-      _Local_const_iterator(_Hash_node<_Value, __cache>* __p,
-			    std::size_t __bkt, std::size_t __bkt_count)
-      : _Local_iterator_base<_Key, _Value, _ExtractKey, _H1, _H2, _Hash,
-			     __cache>(__p, __bkt, __bkt_count)
+      _Local_const_iterator(_Hash_node<_Value, __cache>* __p)
+	: _Local_iterator_base<_Value, __cache>(__p)
       { }
 
-      _Local_const_iterator(const _Local_iterator<_Key, _Value, _ExtractKey,
-						  _H1, _H2, _Hash,
+      _Local_const_iterator(const _Local_iterator<_Value,
 						  __constant_iterators,
 						  __cache>& __x)
-      : _Local_iterator_base<_Key, _Value, _ExtractKey, _H1, _H2, _Hash,
-			     __cache>(__x._M_cur, __x._M_bucket,
-				      __x._M_bucket_count)
+      : _Local_iterator_base<_Value, __cache>(__x._M_cur)
       { }
 
       reference
       operator*() const
-      { return this->_M_cur->_M_v; }
+      { return this->_M_cur->_M_v(); }
 
       pointer
       operator->() const
-      { return std::__addressof(this->_M_cur->_M_v); }
+      { return std::__addressof(this->_M_cur->_M_v()); }
 
       _Local_const_iterator&
       operator++()
@@ -1477,14 +1428,11 @@
 						   __constant_iterators::value,
 						   __hash_cached::value>;
 
-    using local_iterator = __detail::_Local_iterator<key_type, value_type,
-						  _ExtractKey, _H1, _H2, _Hash,
+    using local_iterator = __detail::_Local_iterator<value_type,
 						  __constant_iterators::value,
 						     __hash_cached::value>;
 
-    using const_local_iterator = __detail::_Local_const_iterator<key_type,
-								 value_type,
-					_ExtractKey, _H1, _H2, _Hash,
+    using const_local_iterator = __detail::_Local_const_iterator<value_type,
 					__constant_iterators::value,
 					__hash_cached::value>;
 
@@ -1502,6 +1450,7 @@
 
   protected:
     using __node_base = __detail::_Hash_node_base;
+    using __node_empty = __detail::_Hash_node<_Value, false>;
     using __bucket_type = __node_base*;
 
     _Hashtable_base(const _ExtractKey& __ex, const _H1& __h1, const _H2& __h2,
