This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.



Re: Review Hashtable extract node API


On 07/06/19 18:39 +0200, François Dumont wrote:
On 6/5/19 6:22 PM, Jonathan Wakely wrote:
On 04/06/19 19:19 +0200, François Dumont wrote:
Hi

    Here is a patch to enhance the _Hashtable extract node API and fix a FIXME request.

    The enhancement to the extract node API is that extract(const key_type&) no longer calls extract(const_iterator). Previously we had to loop through the bucket nodes again to find the node preceding the one to extract. Even though a bucket should not contain many nodes (in unique key mode), the extra loop is easy to avoid.

Nice.

    To fix the FIXME I introduced a node smart pointer type that manages the node lifetime. The node is released from this smart pointer only once no exception can be raised. In the context of the node extract API the node handle is treated as the smart pointer, so the node handle remains the owner of the node if an exception is thrown while reinserting it; I hope that is the expected behavior.

Yes, that's right.

I was going to suggest just using the node handle type instead of
inventing a new smart pointer, but the handle type uses std::optional
so isn't available for C++11/14.

I considered it too, or even using shared_ptr, but thought it was overkill.



    * include/bits/hashtable_policy.h
    (struct _NodeSmartPointer<_NodeAlloc>): New.
    (_Map_base<>::operator[](const key_type&)): Use latter, adapt.
    (_Map_base<>::operator[](key_type&&)): Likewise.
    * include/bits/hashtable.h
    (_Hashtable<>::__node_sp_t): New.
    (_Hashtable<>::_M_insert_unique_node(size_type, __hash_code,
    __node_type*, size_type)): Replace by...
(_Hashtable<>::_M_insert_unique_node<_NodeAccessor>(const key_type&,
    size_type, __hash_code, const _NodeAccessor&, size_type)): ...that.
    (_Hashtable<>::_M_insert_multi_node(__node_type*, __hash_code,
    __node_type*)): Replace by...
(_Hashtable<>::_M_insert_multi_node<_NodeAccessor>(__node_type*,
    __hash_code, const _NodeAccessor&)): ...that.
    (_Hashtable<>::_M_reinsert_node): Adapt.
    (_Hashtable<>::_M_reinsert_node_multi): Adapt.
    (_Hashtable<>::_M_extract_node(size_t, __node_base*)): New.
    (_Hashtable<>::extract(const_iterator)): Use latter.
    (_Hashtable<>::extract(const _Key&)): Likewise.
    (_Hashtable<>::_M_merge_unique): Adapt.
    (_Hashtable<>::_M_emplace<_Args>(true_type, _Args&&...)): Adapt.
    (_Hashtable<>::_M_emplace<_Args>(const_iterator, false_type,
    _Args&&...)): Adapt.

Tested under Linux x86_64.

OK to commit?

François


diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index e2e3f016a35..307865b96bf 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -197,6 +197,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
      using __hash_cached = typename __traits_type::__hash_cached;
      using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>;
      using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
+      using __node_sp_t = __detail::_NodeSmartPointer<__node_alloc_type>;

      using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>;

@@ -669,18 +670,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
      __node_base*
      _M_get_previous_node(size_type __bkt, __node_base* __n);

-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
-      // no element with its key already present). Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node with key __k and hash code __code, in bucket __bkt if no
+      // rehash (assumes no element with its key already present).
+      template<typename _NodeAccessor>
    iterator
-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
-                __node_type* __n, size_type __n_elt = 1);
+    _M_insert_unique_node(const key_type& __k, size_type __bkt,
+                  __hash_code __code, const _NodeAccessor&,
+                  size_type __n_elt = 1);

-      // Insert node with hash code __code. Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node with hash code __code.
+      template<typename _NodeAccessor>
    iterator
-      _M_insert_multi_node(__node_type* __hint,
-               __hash_code __code, __node_type* __n);
+    _M_insert_multi_node(__node_type* __hint, __hash_code __code,
+                 const _NodeAccessor& __node_accessor);

It looks like most times you call these functions you pass an
identical lambda expression, but each of those lambda expressions will
create a unique type. That means you create different instantiations
of the function templates even though they do exactly the same thing.

That's just generating multiple copies of identical code. Passing in a
function object to provide the node pointer doesn't really seem
necessary anyway, so if it results in larger executables it's really
not desirable.


Maybe that is one of the reasons we see a difference when running the performance tests at -O3. Seeing your proposal, I don't know why I relied so heavily on lambdas.



The attached patch still just passes in a node pointer (which the
function takes ownership of, unless it throws). Because the semantics
of _M_insert_multi_node change, we need the symbol name to change as
well (so old code linking to the old symbol doesn't find the new
definition with different semantics). Passing in the key makes it a
different symbol (in almost all cases the caller has the key available
already anyway).
Very nice.

An alternative design would be for _M_insert_(unique|multi)_node to
take a reference to the RAII type and set its _M_node member to null
after taking ownership of that pointer. That would be more explicit
about the ownership transfer, however it would require changes to the
_M_reinsert_node and _M_reinsert_node_multi functions which don't use
the RAII type (because the __node_type* is already owned by a node
handle).


      template<typename... _Args>
    std::pair<iterator, bool>
@@ -805,9 +807,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
          }
        else
          {
-        __ret.position
-          = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
-        __nh._M_ptr = nullptr;
+        __ret.position = _M_insert_unique_node(__k, __bkt, __code,
+                [&__nh]()
+                { return std::exchange(__nh._M_ptr, nullptr); });

The existing code here can just be changed to pass in __k, but still
set __nh._M_ptr to null after the call returns.

i.e.

        __ret.position
          = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
        __nh._M_ptr = nullptr;


        __ret.inserted = true;
          }
      }
@@ -818,33 +820,23 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
      iterator
      _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
      {
-    iterator __ret;
    if (__nh.empty())
-      __ret = end();
-    else
-      {
+      return end();
+
    __glibcxx_assert(get_allocator() == __nh.get_allocator());

    auto __code = this->_M_hash_code(__nh._M_key());
-        auto __node = std::exchange(__nh._M_ptr, nullptr);
-        // FIXME: this deallocates the node on exception.
-        __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
-      }
-    return __ret;
+    return _M_insert_multi_node(__hint._M_cur, __code,
+                  [&__nh]()
+                  { return std::exchange(__nh._M_ptr, nullptr); });

Now that the call won't deallocate anything, we can fix the FIXME by
just delaying resetting __nh._M_ptr until the call has returned:

       auto __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
       __nh._M_ptr = nullptr;
       return __ret;


      }

+    private:
      /// Extract a node.

We don't want a Doxygen "///" comment on a private function, just "//"
is OK. In this case I don't think we need any comment, the
_M_extract_node name is clear enough. Saying "extract a node" adds
nothing.


      node_type
-      extract(const_iterator __pos)
+      _M_extract_node(size_t __bkt, __node_base* __prev_n)
      {
-    __node_type* __n = __pos._M_cur;
-    size_t __bkt = _M_bucket_index(__n);
-
-    // Look for previous node to unlink it from the erased one, this
-    // is why we need buckets to contain the before begin to make
-    // this search fast.
-    __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
-
+    __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
    if (__prev_n == _M_buckets[__bkt])
      _M_remove_bucket_begin(__bkt, __n->_M_next(),
         __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
@@ -861,14 +853,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
    return { __n, this->_M_node_allocator() };
      }

+    public:
+      /// Extract a node.
+      node_type
+      extract(const_iterator __pos)
+      {
+    size_t __bkt = _M_bucket_index(__pos._M_cur);
+    return _M_extract_node(__bkt, _M_get_previous_node(__bkt, __pos._M_cur));
+      }
+
      /// Extract a node.
      node_type
      extract(const _Key& __k)
      {
    node_type __nh;
-    auto __pos = find(__k);
-    if (__pos != end())
-      __nh = extract(const_iterator(__pos));
+    __hash_code __code = this->_M_hash_code(__k);
+    std::size_t __bkt = _M_bucket_index(__k, __code);
+    if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, __code))
+      __nh = _M_extract_node(__bkt, __prev_node);
    return __nh;
      }

Looks good.

@@ -885,14 +887,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
      for (auto __i = __src.begin(), __end = __src.end(); __i != __end;)
        {
          auto __pos = __i++;
-          const key_type& __k = this->_M_extract()(__pos._M_cur->_M_v());
+          const key_type& __k = this->_M_extract()(*__pos);
          __hash_code __code = this->_M_hash_code(__k);
          size_type __bkt = _M_bucket_index(__k, __code);
          if (_M_find_node(__bkt, __k, __code) == nullptr)
        {
          auto __nh = __src.extract(__pos);
-          _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
-          __nh._M_ptr = nullptr;
+          _M_insert_unique_node(__k, __bkt, __code,
+                [&__nh]()
+                { return std::exchange(__nh._M_ptr, nullptr); },
+                __n_elt);

Again, the old code seems fine. Just pass in __k.

          __n_elt = 1;
        }
          else if (__n_elt != 1)
@@ -1634,31 +1638,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
      -> pair<iterator, bool>
      {
    // First build the node to get access to the hash code
-    __node_type* __node
-      = this->_M_allocate_node(std::forward<_Args>(__args)...);
-    const key_type& __k = this->_M_extract()(__node->_M_v());
-    __hash_code __code;
-    __try
-      {
-        __code = this->_M_hash_code(__k);
-      }
-    __catch(...)
-      {
-        this->_M_deallocate_node(__node);

Aside: It's confusing that _M_allocate_node actually allocates a node
and constructs an element, and _M_deallocate_node destroys the element
and deallocates the node (and we have _M_deallocate_node_ptr which
just does the deallocation part). We should add comments explaining
that.


diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 878154ae210..ddcff0ec9d0 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -170,6 +170,40 @@ namespace __detail
      __hashtable_alloc& _M_h;
    };

+  // Node smart pointer to benefit from RAII.
+  template<typename _NodeAlloc>
+    struct _NodeSmartPointer
+    {
+    private:
+      using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
+      using __node_type = typename __hashtable_alloc::__node_type;
+
+    public:
+      _NodeSmartPointer(__hashtable_alloc& __h, __node_type* __node) noexcept
+      : _M_h(__h), _M_node(__node) { }
+
+      ~_NodeSmartPointer()
+      {
+    if (_M_node)
+      _M_h._M_deallocate_node(_M_node);
+      }
+
+      __node_type*
+      _M_get() const { return _M_node; }
+
+      __node_type*
+      _M_release() noexcept
+      {
+    __node_type* __tmp = _M_node;
+    _M_node = nullptr;
+    return __tmp;
+      }
+
+    private:
+      __hashtable_alloc& _M_h;
+      __node_type* _M_node;
+    };

This can be defined as a member of _Hashtable, and can be much
simpler:

     // Simple RAII type for managing a node containing an element
     struct _Scoped_node
     {
    _Scoped_node(__hashtable_alloc* __h, __node_type* __n)
    : _M_h(__h), _M_node(__n) { }

    ~_Scoped_node() { if (_M_node) _M_h->_M_deallocate_node(_M_node); };

    _Scoped_node(const _Scoped_node&) = delete;
    _Scoped_node& operator=(const _Scoped_node&) = delete;

    __hashtable_alloc* _M_h;
    __node_type* _M_node;
     };

I've attached a patch implementing these suggested changes. I think
it's simpler this way, what do you think?

I think it is much better than what I proposed.



(This patch is generated with 'git diff -b' so whitespace changes
aren't shown, so this shouldn't be applied directly. I can send
another version with the whitespace changes that is suitable to
commit).

Mine was generated with 'git diff -w' for the same reason.

You can send the full patch to me if you prefer but feel free to commit it yourself.

Here's what I've committed. Compared to the previous patch this adds a
constructor to _Scoped_node that creates the new node, so the caller
doesn't have to create it:

      // Allocate a node and construct an element within it.
      template<typename... _Args>
        _Scoped_node(__hashtable_alloc* __h, _Args&&... __args)
        : _M_h(__h),
          _M_node(__h->_M_allocate_node(std::forward<_Args>(__args)...))
        { }

Tested x86_64-linux, committed to trunk.


commit 65b6d828762c2568c9de850a9a4943d5ef5f87f1
Author: Jonathan Wakely <jwakely@redhat.com>
Date:   Mon Jun 17 10:02:19 2019 +0100

    Simplify node ownership in _Hashtable members
    
    Introduce an RAII type to manage nodes in unordered containers while
    they are being inserted. If the caller always owns a node until it is
    inserted, then the insertion functions don't need to deallocate on
    failure. This allows a FIXME in the node re-insertion API to be removed.
    
    Also change extract(const key_type&) to not call extract(const_iterator)
    anymore.  This avoids looping through the bucket nodes again to find the
    node before the one being extracted.
    
    2019-06-17  François Dumont  <fdumont@gcc.gnu.org>
                Jonathan Wakely  <jwakely@redhat.com>
    
            * include/bits/hashtable.h (struct _Hashtable::_Scoped_node): New type.
            (_Hashtable::_M_insert_unique_node): Add key_type parameter. Don't
            deallocate node if insertion fails.
            (_Hashtable::_M_insert_multi_node): Likewise.
            (_Hashtable::_M_reinsert_node): Pass additional key argument.
            (_Hashtable::_M_reinsert_node_multi): Likewise. Remove FIXME.
            (_Hashtable::_M_extract_node(size_t, __node_base*)): New function.
            (_Hashtable::extract(const_iterator)): Use _M_extract_node.
            (_Hashtable::extract(const _Key&)): Likewise.
            (_Hashtable::_M_merge_unique): Pass additional key argument.
            (_Hashtable::_M_emplace<Args>(true_type, Args&&...)): Likewise. Use
            _Scoped_node.
            (_Hashtable::_M_insert): Likewise.
            * include/bits/hashtable_policy.h (_Map_base::operator[]): Likewise.
            (_Hashtable_alloc): Add comments to functions with misleading names.

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index e2e3f016a35..ab579a7059e 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -256,6 +256,30 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       using __reuse_or_alloc_node_gen_t =
 	__detail::_ReuseOrAllocNode<__node_alloc_type>;
 
+      // Simple RAII type for managing a node containing an element
+      struct _Scoped_node
+      {
+	// Take ownership of a node with a constructed element.
+	_Scoped_node(__node_type* __n, __hashtable_alloc* __h)
+	: _M_h(__h), _M_node(__n) { }
+
+	// Allocate a node and construct an element within it.
+	template<typename... _Args>
+	  _Scoped_node(__hashtable_alloc* __h, _Args&&... __args)
+	  : _M_h(__h),
+	    _M_node(__h->_M_allocate_node(std::forward<_Args>(__args)...))
+	  { }
+
+	// Destroy element and deallocate node.
+	~_Scoped_node() { if (_M_node) _M_h->_M_deallocate_node(_M_node); };
+
+	_Scoped_node(const _Scoped_node&) = delete;
+	_Scoped_node& operator=(const _Scoped_node&) = delete;
+
+	__hashtable_alloc* _M_h;
+	__node_type* _M_node;
+      };
+
       // Metaprogramming for picking apart hash caching.
       template<typename _Cond>
 	using __if_hash_cached = __or_<__not_<__hash_cached>, _Cond>;
@@ -669,17 +693,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base*
       _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      // Insert node with hash code __code, in bucket bkt if no rehash (assumes
-      // no element with its key already present). Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node __n with key __k and hash code __code, in bucket __bkt
+      // if no rehash (assumes no element with same key already present).
+      // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_unique_node(size_type __bkt, __hash_code __code,
-			    __node_type* __n, size_type __n_elt = 1);
+      _M_insert_unique_node(const key_type& __k, size_type __bkt,
+			    __hash_code __code, __node_type* __n,
+			    size_type __n_elt = 1);
 
-      // Insert node with hash code __code. Take ownership of the node,
-      // deallocate it on exception.
+      // Insert node __n with key __k and hash code __code.
+      // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_multi_node(__node_type* __hint,
+      _M_insert_multi_node(__node_type* __hint, const key_type& __k,
 			   __hash_code __code, __node_type* __n);
 
       template<typename... _Args>
@@ -806,7 +831,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	    else
 	      {
 		__ret.position
-		  = _M_insert_unique_node(__bkt, __code, __nh._M_ptr);
+		  = _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr);
 		__nh._M_ptr = nullptr;
 		__ret.inserted = true;
 	      }
@@ -818,33 +843,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       iterator
       _M_reinsert_node_multi(const_iterator __hint, node_type&& __nh)
       {
-	iterator __ret;
 	if (__nh.empty())
-	  __ret = end();
-	else
-	  {
-	    __glibcxx_assert(get_allocator() == __nh.get_allocator());
+	  return end();
 
-	    auto __code = this->_M_hash_code(__nh._M_key());
-	    auto __node = std::exchange(__nh._M_ptr, nullptr);
-	    // FIXME: this deallocates the node on exception.
-	    __ret = _M_insert_multi_node(__hint._M_cur, __code, __node);
-	  }
+	__glibcxx_assert(get_allocator() == __nh.get_allocator());
+
+	const key_type& __k = __nh._M_key();
+	auto __code = this->_M_hash_code(__k);
+	auto __ret
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __nh._M_ptr);
+	__nh._M_ptr = nullptr;
 	return __ret;
       }
 
-      /// Extract a node.
+    private:
       node_type
-      extract(const_iterator __pos)
+      _M_extract_node(size_t __bkt, __node_base* __prev_n)
       {
-	__node_type* __n = __pos._M_cur;
-	size_t __bkt = _M_bucket_index(__n);
-
-	// Look for previous node to unlink it from the erased one, this
-	// is why we need buckets to contain the before begin to make
-	// this search fast.
-	__node_base* __prev_n = _M_get_previous_node(__bkt, __n);
-
+	__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
 	if (__prev_n == _M_buckets[__bkt])
 	  _M_remove_bucket_begin(__bkt, __n->_M_next(),
 	     __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
@@ -861,14 +877,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	return { __n, this->_M_node_allocator() };
       }
 
+    public:
+      // Extract a node.
+      node_type
+      extract(const_iterator __pos)
+      {
+	size_t __bkt = _M_bucket_index(__pos._M_cur);
+	return _M_extract_node(__bkt,
+			       _M_get_previous_node(__bkt, __pos._M_cur));
+      }
+
       /// Extract a node.
       node_type
       extract(const _Key& __k)
       {
 	node_type __nh;
-	auto __pos = find(__k);
-	if (__pos != end())
-	  __nh = extract(const_iterator(__pos));
+	__hash_code __code = this->_M_hash_code(__k);
+	std::size_t __bkt = _M_bucket_index(__k, __code);
+	if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, __code))
+	  __nh = _M_extract_node(__bkt, __prev_node);
 	return __nh;
       }
 
@@ -885,13 +912,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  for (auto __i = __src.begin(), __end = __src.end(); __i != __end;)
 	    {
 	      auto __pos = __i++;
-	      const key_type& __k = this->_M_extract()(__pos._M_cur->_M_v());
+	      const key_type& __k = this->_M_extract()(*__pos);
 	      __hash_code __code = this->_M_hash_code(__k);
 	      size_type __bkt = _M_bucket_index(__k, __code);
 	      if (_M_find_node(__bkt, __k, __code) == nullptr)
 		{
 		  auto __nh = __src.extract(__pos);
-		  _M_insert_unique_node(__bkt, __code, __nh._M_ptr, __n_elt);
+		  _M_insert_unique_node(__k, __bkt, __code, __nh._M_ptr,
+					__n_elt);
 		  __nh._M_ptr = nullptr;
 		  __n_elt = 1;
 		}
@@ -1634,31 +1662,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> pair<iterator, bool>
       {
 	// First build the node to get access to the hash code
-	__node_type* __node
-	  = this->_M_allocate_node(std::forward<_Args>(__args)...);
-	const key_type& __k = this->_M_extract()(__node->_M_v());
-	__hash_code __code;
-	__try
-	  {
-	    __code = this->_M_hash_code(__k);
-	  }
-	__catch(...)
-	  {
-	    this->_M_deallocate_node(__node);
-	    __throw_exception_again;
-	  }
-
+	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
+	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__k, __code);
 	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
-	  {
-	    // There is already an equivalent node, no insertion
-	    this->_M_deallocate_node(__node);
-	    return std::make_pair(iterator(__p), false);
-	  }
+	  // There is already an equivalent node, no insertion
+	  return std::make_pair(iterator(__p), false);
 
 	// Insert the node
-	return std::make_pair(_M_insert_unique_node(__bkt, __code, __node),
-			      true);
+	auto __pos = _M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return { __pos, true };
       }
 
   template<typename _Key, typename _Value,
@@ -1673,21 +1688,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> iterator
       {
 	// First build the node to get its hash code.
-	__node_type* __node =
-	  this->_M_allocate_node(std::forward<_Args>(__args)...);
+	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
 
-	__hash_code __code;
-	__try
-	  {
-	    __code = this->_M_hash_code(this->_M_extract()(__node->_M_v()));
-	  }
-	__catch(...)
-	  {
-	    this->_M_deallocate_node(__node);
-	    __throw_exception_again;
-	  }
-
-	return _M_insert_multi_node(__hint._M_cur, __code, __node);
+	__hash_code __code = this->_M_hash_code(__k);
+	auto __pos
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return __pos;
       }
 
   template<typename _Key, typename _Value,
@@ -1697,8 +1705,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_unique_node(size_type __bkt, __hash_code __code,
-			  __node_type* __node, size_type __n_elt)
+    _M_insert_unique_node(const key_type& __k, size_type __bkt,
+			  __hash_code __code, __node_type* __node,
+			  size_type __n_elt)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1706,31 +1715,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count,
 					  __n_elt);
 
-      __try
+      if (__do_rehash.first)
 	{
-	  if (__do_rehash.first)
-	    {
-	      _M_rehash(__do_rehash.second, __saved_state);
-	      __bkt
-		= _M_bucket_index(this->_M_extract()(__node->_M_v()), __code);
-	    }
-
-	  this->_M_store_code(__node, __code);
-
-	  // Always insert at the beginning of the bucket.
-	  _M_insert_bucket_begin(__bkt, __node);
-	  ++_M_element_count;
-	  return iterator(__node);
-	}
-      __catch(...)
-	{
-	  this->_M_deallocate_node(__node);
-	  __throw_exception_again;
+	  _M_rehash(__do_rehash.second, __saved_state);
+	  __bkt = _M_bucket_index(__k, __code);
 	}
+
+      this->_M_store_code(__node, __code);
+
+      // Always insert at the beginning of the bucket.
+      _M_insert_bucket_begin(__bkt, __node);
+      ++_M_element_count;
+      return iterator(__node);
     }
 
-  // Insert node, in bucket bkt if no rehash (assumes no element with its key
-  // already present). Take ownership of the node, deallocate it on exception.
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
@@ -1738,60 +1736,50 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    _M_insert_multi_node(__node_type* __hint, __hash_code __code,
-			 __node_type* __node)
+    _M_insert_multi_node(__node_type* __hint, const key_type& __k,
+			 __hash_code __code, __node_type* __node)
     -> iterator
     {
       const __rehash_state& __saved_state = _M_rehash_policy._M_state();
       std::pair<bool, std::size_t> __do_rehash
 	= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count, 1);
 
-      __try
-	{
-	  if (__do_rehash.first)
-	    _M_rehash(__do_rehash.second, __saved_state);
+      if (__do_rehash.first)
+	_M_rehash(__do_rehash.second, __saved_state);
 
-	  this->_M_store_code(__node, __code);
-	  const key_type& __k = this->_M_extract()(__node->_M_v());
-	  size_type __bkt = _M_bucket_index(__k, __code);
+      this->_M_store_code(__node, __code);
+      size_type __bkt = _M_bucket_index(__k, __code);
 
-	  // Find the node before an equivalent one or use hint if it exists and
-	  // if it is equivalent.
-	  __node_base* __prev
-	    = __builtin_expect(__hint != nullptr, false)
-	      && this->_M_equals(__k, __code, __hint)
-		? __hint
-		: _M_find_before_node(__bkt, __k, __code);
-	  if (__prev)
-	    {
-	      // Insert after the node before the equivalent one.
-	      __node->_M_nxt = __prev->_M_nxt;
-	      __prev->_M_nxt = __node;
-	      if (__builtin_expect(__prev == __hint, false))
-	      	// hint might be the last bucket node, in this case we need to
-	      	// update next bucket.
-	      	if (__node->_M_nxt
-	      	    && !this->_M_equals(__k, __code, __node->_M_next()))
-	      	  {
-	      	    size_type __next_bkt = _M_bucket_index(__node->_M_next());
-	      	    if (__next_bkt != __bkt)
-	      	      _M_buckets[__next_bkt] = __node;
-	      	  }
-	    }
-	  else
-	    // The inserted node has no equivalent in the
-	    // hashtable. We must insert the new node at the
-	    // beginning of the bucket to preserve equivalent
-	    // elements' relative positions.
-	    _M_insert_bucket_begin(__bkt, __node);
-	  ++_M_element_count;
-	  return iterator(__node);
-	}
-      __catch(...)
+      // Find the node before an equivalent one or use hint if it exists and
+      // if it is equivalent.
+      __node_base* __prev
+	= __builtin_expect(__hint != nullptr, false)
+	  && this->_M_equals(__k, __code, __hint)
+	    ? __hint
+	    : _M_find_before_node(__bkt, __k, __code);
+      if (__prev)
 	{
-	  this->_M_deallocate_node(__node);
-	  __throw_exception_again;
+	  // Insert after the node before the equivalent one.
+	  __node->_M_nxt = __prev->_M_nxt;
+	  __prev->_M_nxt = __node;
+	  if (__builtin_expect(__prev == __hint, false))
+	    // hint might be the last bucket node, in this case we need to
+	    // update next bucket.
+	    if (__node->_M_nxt
+		&& !this->_M_equals(__k, __code, __node->_M_next()))
+	      {
+		size_type __next_bkt = _M_bucket_index(__node->_M_next());
+		if (__next_bkt != __bkt)
+		  _M_buckets[__next_bkt] = __node;
+	      }
 	}
+      else
+	// The inserted node has no equivalent in the hashtable. We must
+	// insert the new node at the beginning of the bucket to preserve
+	// equivalent elements' relative positions.
+	_M_insert_bucket_begin(__bkt, __node);
+      ++_M_element_count;
+      return iterator(__node);
     }
 
   // Insert v if no element with its key is already present.
@@ -1811,12 +1799,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__k, __code);
 
-	__node_type* __n = _M_find_node(__bkt, __k, __code);
-	if (__n)
-	  return std::make_pair(iterator(__n), false);
+	if (__node_type* __node = _M_find_node(__bkt, __k, __code))
+	  return { iterator(__node), false };
 
-	__n = __node_gen(std::forward<_Arg>(__v));
-	return { _M_insert_unique_node(__bkt, __code, __n, __n_elt), true };
+	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
+	auto __pos
+	  = _M_insert_unique_node(__k, __bkt, __code, __node._M_node, __n_elt);
+	__node._M_node = nullptr;
+	return { __pos, true };
       }
 
   // Insert v unconditionally.
@@ -1837,9 +1827,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(this->_M_extract()(__v));
 
 	// Second allocate new node so that we don't rehash if it throws.
-	__node_type* __node = __node_gen(std::forward<_Arg>(__v));
-
-	return _M_insert_multi_node(__hint._M_cur, __code, __node);
+	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
+	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
+	auto __pos
+	  = _M_insert_multi_node(__hint._M_cur, __k, __code, __node._M_node);
+	__node._M_node = nullptr;
+	return __pos;
       }
 
   template<typename _Key, typename _Value,
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 878154ae210..f5809c7443a 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -706,17 +706,19 @@ namespace __detail
       __hashtable* __h = static_cast<__hashtable*>(this);
       __hash_code __code = __h->_M_hash_code(__k);
       std::size_t __bkt = __h->_M_bucket_index(__k, __code);
-      __node_type* __p = __h->_M_find_node(__bkt, __k, __code);
+      if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+	return __node->_M_v().second;
 
-      if (!__p)
-	{
-	  __p = __h->_M_allocate_node(std::piecewise_construct,
-				      std::tuple<const key_type&>(__k),
-				      std::tuple<>());
-	  return __h->_M_insert_unique_node(__bkt, __code, __p)->second;
-	}
-
-      return __p->_M_v().second;
+      typename __hashtable::_Scoped_node __node {
+	__h,
+	std::piecewise_construct,
+	std::tuple<const key_type&>(__k),
+	std::tuple<>()
+      };
+      auto __pos
+	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+      __node._M_node = nullptr;
+      return __pos->second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -731,17 +733,19 @@ namespace __detail
       __hashtable* __h = static_cast<__hashtable*>(this);
       __hash_code __code = __h->_M_hash_code(__k);
       std::size_t __bkt = __h->_M_bucket_index(__k, __code);
-      __node_type* __p = __h->_M_find_node(__bkt, __k, __code);
+      if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+	return __node->_M_v().second;
 
-      if (!__p)
-	{
-	  __p = __h->_M_allocate_node(std::piecewise_construct,
-				      std::forward_as_tuple(std::move(__k)),
-				      std::tuple<>());
-	  return __h->_M_insert_unique_node(__bkt, __code, __p)->second;
-	}
-
-      return __p->_M_v().second;
+      typename __hashtable::_Scoped_node __node {
+	__h,
+	std::piecewise_construct,
+	std::forward_as_tuple(std::move(__k)),
+	std::tuple<>()
+      };
+      auto __pos
+	= __h->_M_insert_unique_node(__k, __bkt, __code, __node._M_node);
+      __node._M_node = nullptr;
+      return __pos->second;
     }
 
   template<typename _Key, typename _Pair, typename _Alloc, typename _Equal,
@@ -1972,8 +1976,8 @@ namespace __detail
     }
 
   /**
-   * This type deals with all allocation and keeps an allocator instance through
-   * inheritance to benefit from EBO when possible.
+   * This type deals with all allocation and keeps an allocator instance
+   * through inheritance to benefit from EBO when possible.
    */
   template<typename _NodeAlloc>
     struct _Hashtable_alloc : private _Hashtable_ebo_helper<0, _NodeAlloc>
@@ -2012,17 +2016,21 @@ namespace __detail
       _M_node_allocator() const
       { return __ebo_node_alloc::_M_cget(); }
 
+      // Allocate a node and construct an element within it.
       template<typename... _Args>
 	__node_type*
 	_M_allocate_node(_Args&&... __args);
 
+      // Destroy the element within a node and deallocate the node.
       void
       _M_deallocate_node(__node_type* __n);
 
+      // Deallocate a node.
       void
       _M_deallocate_node_ptr(__node_type* __n);
 
-      // Deallocate the linked list of nodes pointed to by __n
+      // Deallocate the linked list of nodes pointed to by __n.
+      // The elements within the nodes are destroyed.
       void
       _M_deallocate_nodes(__node_type* __n);
 
