This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.



[Patch/RFC] tr1::shared_ptr<> removal of lock, choosing thread safety


Hi,

Attached is a patch against TR1 shared_ptr<> on the libstdcxx_so_7-branch
that removes the need for the lock in the weak_ptr-to-shared_ptr
assignment and implements it with an atomic built-in instead.
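
In sketch form, the promotion from weak to shared becomes an
add-if-not-zero loop around the GCC compare-and-swap built-in, roughly
like this (simplified from the patch below; the free-standing function
name and the plain int counter are just for illustration):

  // Add a strong reference only if the use count is still non-zero.
  // Returns false when the managed object is already gone, in which
  // case the caller throws bad_weak_ptr.
  bool
  try_add_ref(volatile int* use_count)
  {
    int count;
    do
      {
        count = *use_count;
        if (count == 0)
          return false;
      }
    while (!__sync_bool_compare_and_swap(use_count, count, count + 1));
    return true;
  }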

As a possible solution to the issue of atomic-operation availability,
I've been discussing the use of an additional boolean template
parameter that specifies whether thread safety is needed. It defaults
to false, so existing single-threaded code should compile and run as
usual.

The shared_ptr<T, true> specialisation, however, will only compile if
atomic compare-and-swap is present.
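
To make that concrete, usage would look roughly like this (widget and
the variable names are made up for illustration; the second template
argument is the non-standard extension this patch adds):

  #include <tr1/memory>

  struct widget { int value; };

  int main()
  {
    // Default second argument (false): behaves as shared_ptr does today.
    std::tr1::shared_ptr<widget> local(new widget);

    // Opt in to lock-free, thread-safe reference counting.  This
    // specialisation only compiles where compare-and-swap is available.
    std::tr1::shared_ptr<widget, true> shared(new widget);
    std::tr1::weak_ptr<widget, true>   weak(shared);
    std::tr1::shared_ptr<widget, true> promoted(weak); // uses add_ref_lock()
    return 0;
  }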

I've also attached a preliminary test case that exercises the changed
section of code from a couple of threads and checks the reference
counter for consistency. It passes on my SMP x86 system.


I'm not expecting this to be committed yet; I'm mostly after people's
opinions on the following:

1) The template argument approach. If this gets the go-ahead, I plan to
use it in my changes to list<> and the slist<> extension, and in the
other containers for which I haven't created lock-free versions yet.
Yes, it's non-standard, and yes, if the standard adds its own template
arguments we're probably screwed, but I don't consider that a likely
scenario. What do you think?

2) The test case. Right now, it just tries to provoke potential race
conditions by running the code in question as often as possible in each
thread. This is obviously a stochastic approach and may give false
positives. I'm not aware of any other useful tests for thread safety,
other than maybe formal verification, which doesn't seem to be anywhere
near mature enough. Tips from the gurus would be appreciated, as I need
similar, but more complex, test cases for the lock-free containers.

3) The test case, part 2. As the code in question requires the atomic
built-in functions, I've had to adjust the -march flag on x86 by playing
around with the dg-options declarations. These only cover x86 right now
(x86_64 isn't an issue, and I don't know enough about the details of
other architectures in this respect), and my particular way of handling
it is rather hacky. Is there an existing way of making this nicer, and
if not, is that something worth looking into?


Notes:
- Specific to shared_ptr: I'll probably put the lock back in the
shared_ptr<T, false> specialisation when __GTHREADS is defined, in case
any existing code relies on it being there (even though neither the
standard nor the documentation says anything about thread safety).

- An equivalent patch to shared_ptr<> against *trunk* can be found at
http://mulliard.homelinux.org/~phillip/code/soc/patches/trunk_shared_ptr_template_args.patch

- I've signed the FSF copyright assignment form and sent it back, but
it's not arrived yet as far as I know, so my code should not be
committed yet.

Thanks

~phil
Index: testsuite/tr1/2_general_utilities/memory/shared_ptr/thread/lockfree_weaktoshared.cc
===================================================================
--- testsuite/tr1/2_general_utilities/memory/shared_ptr/thread/lockfree_weaktoshared.cc	(revision 0)
+++ testsuite/tr1/2_general_utilities/memory/shared_ptr/thread/lockfree_weaktoshared.cc	(revision 0)
@@ -0,0 +1,109 @@
+// Copyright (C) 2006 Free Software Foundation
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 2, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING.  If not, write to the Free
+// Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
+// USA.
+
+// TR1 2.2.2 Template class shared_ptr [tr.util.smartptr.shared]
+
+// { dg-do run { target *-*-freebsd* *-*-netbsd* *-*-linux* *-*-solaris* *-*-cygwin *-*-darwin* alpha*-*-osf* } }
+// { dg-options "-march=i586 -pthread" { target { {*-*-freebsd* *-*-netbsd* *-*-linux* alpha*-*-osf*} && { *i686*-*-* *i586*-*-* *athlon*-*-* *pentium4*-*-* *opteron*-*-* *k8*-*-* } } } }
+// { dg-options "-pthread" { target { {*-*-freebsd* *-*-netbsd* *-*-linux* alpha*-*-osf*} && { ! { *i686*-*-* *i586*-*-* *athlon*-*-* *pentium4*-*-* *opteron*-*-* *k8*-*-* } } } } }
+// { dg-options "-pthreads" { target *-*-solaris* } }
+
+// Lock-free compare-and-swap is only available on newer x86 machines.
+
+#include <tr1/memory>
+#include <testsuite_hooks.h>
+
+#include <pthread.h>
+
+#ifdef _GLIBCXX_HAVE_UNISTD_H
+#include <unistd.h>	// To test for _POSIX_THREAD_PRIORITY_SCHEDULING
+#endif
+
+/* This (brute-force) tests the atomicity and thus thread safety of the
+ * shared_ptr <- weak_ptr
+ * assignment operation by allocating a test object, retrieving a weak
+ * reference to it, and letting a number of threads repeatedly create strong
+ * references from the weak reference.
+ * Specifically, this tests the function _Sp_counted_base<true>::add_ref_lock()
+ */
+
+
+const unsigned int HAMMER_MAX_THREADS = 10;
+const unsigned long HAMMER_REPEAT = 100000;
+
+struct A
+  {
+    volatile _Atomic_word counter;
+  };
+
+void* thread_hammer(void* opaque_weak)
+{
+  std::tr1::weak_ptr<A, true>& p_weak = *reinterpret_cast<std::tr1::weak_ptr<A, true>*>(opaque_weak);
+  
+  for (unsigned int i = 0; i < HAMMER_REPEAT; ++i)
+    {
+      std::tr1::shared_ptr<A, true> strong(p_weak);
+      __gnu_cxx::__atomic_add(&strong->counter, 1);
+    }
+  return 0;
+}
+
+int
+test01()
+{
+  bool test __attribute__((unused)) = true;
+  std::tr1::shared_ptr<A, true> obj(new A);
+  obj->counter = 0;
+  // Obtain weak reference.
+  std::tr1::weak_ptr<A, true> weak(obj);
+  
+  // Launch threads with pointer to weak reference.
+  pthread_t threads[HAMMER_MAX_THREADS];
+#if defined(__sun) && defined(__svr4__) && _XOPEN_VERSION >= 500
+  pthread_setconcurrency (HAMMER_MAX_THREADS);
+#endif
+  
+  pthread_attr_t tattr;
+  int ret = pthread_attr_init (&tattr);
+
+  for (unsigned int worker = 0; worker < HAMMER_MAX_THREADS; worker++)
+    {
+      if (pthread_create(&threads[worker], &tattr,
+			 thread_hammer, reinterpret_cast<void*>(&weak)))
+	abort ();
+    }
+  // Wait for threads to complete, then check integrity of reference.
+  void* status;
+  for (unsigned int worker = 0; worker < HAMMER_MAX_THREADS; worker++)
+    {
+      if (pthread_join(threads[worker], &status))
+	abort ();
+    }
+  
+  VERIFY( obj.use_count() == 1 );
+  VERIFY( obj->counter == HAMMER_REPEAT * HAMMER_MAX_THREADS );
+  
+  return 0;
+}
+
+int 
+main()
+{
+  test01();
+  return 0;
+}
Index: include/tr1/boost_shared_ptr.h
===================================================================
--- include/tr1/boost_shared_ptr.h	(revision 115149)
+++ include/tr1/boost_shared_ptr.h	(working copy)
@@ -93,110 +93,129 @@
   };
 
 
-class _Sp_counted_base
-{
-public:
-
-  _Sp_counted_base()
-  : _M_use_count(1), _M_weak_count(1)
+template <bool _Thread = false>
+  class _Sp_counted_base
   {
-    // For the case of __GTHREAD_MUTEX_INIT we haven't initialised
-    // the mutex yet, so do it now.
-#if defined(__GTHREADS) && defined(__GTHREAD_MUTEX_INIT)
-    __gthread_mutex_t __tmp = __GTHREAD_MUTEX_INIT;
-    _M_mutex = __tmp;
-#endif
-  }
+  public:
+  
+    _Sp_counted_base()
+    : _M_use_count(1), _M_weak_count(1)
+    {
+    }
+  
+    virtual
+    ~_Sp_counted_base() // nothrow
+    { }
+  
+    // dispose() is called when _M_use_count drops to zero, to release
+    // the resources managed by *this.
+    virtual void
+    dispose() = 0; // nothrow
+  
+    // destroy() is called when _M_weak_count drops to zero.
+    virtual void
+    destroy() // nothrow
+    {
+      delete this;
+    }
+  
+    virtual void*
+    get_deleter(const std::type_info&) = 0;
+  
+    void
+    add_ref_copy()
+    {
+      __gnu_cxx::__atomic_add(&_M_use_count, 1);
+    }
+  
+    void
+    add_ref_lock();
+  
+    void
+    release() // nothrow
+    {
+      if (__gnu_cxx::__exchange_and_add(&_M_use_count, -1) == 1)
+        {
+          dispose();
+  #ifdef __GTHREADS	
+          _GLIBCXX_READ_MEM_BARRIER;
+          _GLIBCXX_WRITE_MEM_BARRIER;
+  #endif
+          if (__gnu_cxx::__exchange_and_add(&_M_weak_count, -1) == 1)
+            destroy();
+        }
+    }
+  
+    void
+    weak_add_ref() // nothrow
+    {
+      __gnu_cxx::__atomic_add(&_M_weak_count, 1);
+    }
+  
+    void
+    weak_release() // nothrow
+    {
+      if (__gnu_cxx::__exchange_and_add(&_M_weak_count, -1) == 1)
+        {
+  #ifdef __GTHREADS
+          _GLIBCXX_READ_MEM_BARRIER;
+          _GLIBCXX_WRITE_MEM_BARRIER;
+  #endif
+          destroy();
+        }
+    }
+  
+    long
+    use_count() const // nothrow
+    {
+      return _M_use_count;  // XXX is this MT safe?
+    }
+  
+  private:
+  
+    _Sp_counted_base(_Sp_counted_base const&);
+    _Sp_counted_base& operator=(_Sp_counted_base const&);
+  
+    _Atomic_word _M_use_count;        // #shared
+    _Atomic_word _M_weak_count;       // #weak + (#shared != 0)
+  };
 
-  virtual
-  ~_Sp_counted_base() // nothrow
-  { }
-
-  // dispose() is called when _M_use_count drops to zero, to release
-  // the resources managed by *this.
-  virtual void
-  dispose() = 0; // nothrow
-
-  // destroy() is called when _M_weak_count drops to zero.
-  virtual void
-  destroy() // nothrow
-  {
-    delete this;
-  }
-
-  virtual void*
-  get_deleter(const std::type_info&) = 0;
-
+template<>
+  inline
   void
-  add_ref_copy()
+  _Sp_counted_base<false>::add_ref_lock()
   {
-    __gnu_cxx::__atomic_add(&_M_use_count, 1);
-  }
-
-  void
-  add_ref_lock()
-  {
-    __gnu_cxx::lock lock(_M_mutex);
+    //__gnu_cxx::lock lock(_M_mutex);
     if (__gnu_cxx::__exchange_and_add(&_M_use_count, 1) == 0)
       {
-	_M_use_count = 0;
-	__throw_bad_weak_ptr();
+        _M_use_count = 0;
+        __throw_bad_weak_ptr();
       }
   }
 
+template<> 
+  inline
   void
-  release() // nothrow
+  _Sp_counted_base<true>::add_ref_lock()
   {
-    if (__gnu_cxx::__exchange_and_add(&_M_use_count, -1) == 1)
-      {
-	dispose();
-#ifdef __GTHREADS	
-	_GLIBCXX_READ_MEM_BARRIER;
-	_GLIBCXX_WRITE_MEM_BARRIER;
-#endif
-	if (__gnu_cxx::__exchange_and_add(&_M_weak_count, -1) == 1)
-	  destroy();
-      }
+    // Perform lock-free add-if-not-zero operation.
+    _Atomic_word __count;
+    do
+    {
+      __count = _M_use_count;
+      if (__count == 0)
+        {
+          __throw_bad_weak_ptr();
+        }
+      /* Replace the current counter value with the old value + 1, as long
+       * as it's not changed meanwhile. */
+    }
+    while (!__sync_bool_compare_and_swap(&_M_use_count, __count, __count + 1));
   }
 
-  void
-  weak_add_ref() // nothrow
-  {
-    __gnu_cxx::__atomic_add(&_M_weak_count, 1);
-  }
-
-  void
-  weak_release() // nothrow
-  {
-    if (__gnu_cxx::__exchange_and_add(&_M_weak_count, -1) == 1)
-      {
-#ifdef __GTHREADS
-	_GLIBCXX_READ_MEM_BARRIER;
-	_GLIBCXX_WRITE_MEM_BARRIER;
-#endif
-	destroy();
-      }
-  }
-
-  long
-  use_count() const // nothrow
-  {
-    return _M_use_count;  // XXX is this MT safe?
-  }
-
-private:
-
-  _Sp_counted_base(_Sp_counted_base const&);
-  _Sp_counted_base& operator=(_Sp_counted_base const&);
-
-  _Atomic_word _M_use_count;        // #shared
-  _Atomic_word _M_weak_count;       // #weak + (#shared != 0)
-  __gnu_cxx::mutex_type _M_mutex;
-};
-
-template<typename _Ptr, typename _Deleter>
+template<typename _Ptr, typename _Deleter, bool _Thread>
   class _Sp_counted_base_impl
-  : public _Sp_counted_base
+  : public _Sp_counted_base<_Thread>
   {
   public:
 
@@ -228,205 +247,209 @@
     _Deleter _M_del; // copy constructor must not throw
   };
 
-class weak_count;
+template<bool _Thread = false>
+  class weak_count;
 
-class shared_count
-{
-private:
-
-  _Sp_counted_base* _M_pi;
-
-  friend class weak_count;
-
-public:
-
-  shared_count()
-  : _M_pi(0) // nothrow
-  { }
-
-  template<typename _Ptr, typename _Deleter>
-    shared_count(_Ptr __p, _Deleter __d)
-    : _M_pi(0)
-    {
-      try
-	{
-	  _M_pi = new _Sp_counted_base_impl<_Ptr, _Deleter>(__p, __d);
-	}
-      catch(...)
-	{
-	  __d(__p); // delete __p
-	  __throw_exception_again;
-	}
-    }
-
-  // auto_ptr<_Tp> is special cased to provide the strong guarantee
-
-  template<typename _Tp>
-    explicit shared_count(std::auto_ptr<_Tp>& __r)
-    : _M_pi(new _Sp_counted_base_impl<_Tp*,
-	    _Sp_deleter<_Tp> >(__r.get(), _Sp_deleter<_Tp>()))
-    { __r.release(); }
-
-  // throws bad_weak_ptr when __r.use_count() == 0
-  explicit shared_count(const weak_count& __r);
-
-  ~shared_count() // nothrow
+template<bool _Thread = false>
+  class shared_count
   {
-    if (_M_pi != 0)
-      _M_pi->release();
-  }
-
-  shared_count(const shared_count& __r)
-  : _M_pi(__r._M_pi) // nothrow
-  {
-    if (_M_pi != 0)
-      _M_pi->add_ref_copy();
-  }
-
-  shared_count&
-  operator=(const shared_count& __r) // nothrow
-  {
-    _Sp_counted_base* __tmp = __r._M_pi;
-
-    if(__tmp != _M_pi)
+  private:
+  
+    _Sp_counted_base<_Thread>* _M_pi;
+  
+    friend class weak_count<_Thread>;
+  
+  public:
+  
+    shared_count()
+    : _M_pi(0) // nothrow
+    { }
+  
+    template<typename _Ptr, typename _Deleter>
+      shared_count(_Ptr __p, _Deleter __d)
+      : _M_pi(0)
       {
-	if(__tmp != 0)
-	  __tmp->add_ref_copy();
-	if(_M_pi != 0)
-	  _M_pi->release();
-	_M_pi = __tmp;
+        try
+          {
+            _M_pi = new _Sp_counted_base_impl<_Ptr, _Deleter, _Thread>(__p, __d);
+          }
+        catch(...)
+          {
+            __d(__p); // delete __p
+            __throw_exception_again;
+          }
       }
-    return *this;
-  }
+  
+    // auto_ptr<_Tp> is special cased to provide the strong guarantee
+  
+    template<typename _Tp>
+      explicit shared_count(std::auto_ptr<_Tp>& __r)
+      : _M_pi(new _Sp_counted_base_impl<_Tp*,
+              _Sp_deleter<_Tp>, _Thread >(__r.get(), _Sp_deleter<_Tp>()))
+      { __r.release(); }
+  
+    // throws bad_weak_ptr when __r.use_count() == 0
+    explicit shared_count(const weak_count<_Thread>& __r);
+  
+    ~shared_count() // nothrow
+    {
+      if (_M_pi != 0)
+        _M_pi->release();
+    }
+  
+    shared_count(const shared_count& __r)
+    : _M_pi(__r._M_pi) // nothrow
+    {
+      if (_M_pi != 0)
+        _M_pi->add_ref_copy();
+    }
+  
+    shared_count&
+    operator=(const shared_count& __r) // nothrow
+    {
+      _Sp_counted_base<_Thread>* __tmp = __r._M_pi;
+  
+      if(__tmp != _M_pi)
+        {
+          if(__tmp != 0)
+            __tmp->add_ref_copy();
+          if(_M_pi != 0)
+            _M_pi->release();
+          _M_pi = __tmp;
+        }
+      return *this;
+    }
+  
+    void swap(shared_count& __r) // nothrow
+    {
+      _Sp_counted_base<_Thread>* __tmp = __r._M_pi;
+      __r._M_pi = _M_pi;
+      _M_pi = __tmp;
+    }
+  
+    long
+    use_count() const // nothrow
+    { return _M_pi != 0 ? _M_pi->use_count() : 0; }
+  
+    bool
+    unique() const // nothrow
+    { return this->use_count() == 1; }
+  
+    friend inline bool
+    operator==(const shared_count& __a, const shared_count& __b)
+    { return __a._M_pi == __b._M_pi; }
+  
+    friend inline bool
+    operator<(const shared_count& __a, const shared_count& __b)
+    { return std::less<_Sp_counted_base<_Thread>*>()(__a._M_pi, __b._M_pi); }
+  
+    void*
+    get_deleter(const std::type_info& __ti) const
+    { return _M_pi ? _M_pi->get_deleter(__ti) : 0; }
+  };
 
-  void swap(shared_count& __r) // nothrow
+template<bool _Thread>
+  class weak_count
   {
-    _Sp_counted_base* __tmp = __r._M_pi;
-    __r._M_pi = _M_pi;
-    _M_pi = __tmp;
-  }
+  private:
+  
+    _Sp_counted_base<_Thread>* _M_pi;
+  
+    friend class shared_count<_Thread>;
+  
+  public:
+  
+    weak_count()
+    : _M_pi(0) // nothrow
+    { }
+  
+    weak_count(const shared_count<_Thread>& __r)
+    : _M_pi(__r._M_pi) // nothrow
+    {
+      if (_M_pi != 0)
+        _M_pi->weak_add_ref();
+    }
+  
+    weak_count(const weak_count<_Thread>& __r)
+    : _M_pi(__r._M_pi) // nothrow
+    {
+      if (_M_pi != 0)
+        _M_pi->weak_add_ref();
+    }
+  
+    ~weak_count() // nothrow
+    {
+      if (_M_pi != 0)
+        _M_pi->weak_release();
+    }
+  
+    weak_count<_Thread>&
+    operator=(const shared_count<_Thread>& __r) // nothrow
+    {
+      _Sp_counted_base<_Thread>* __tmp = __r._M_pi;
+      if (__tmp != 0)
+        __tmp->weak_add_ref();
+      if (_M_pi != 0)
+        _M_pi->weak_release();
+      _M_pi = __tmp;
+  
+      return *this;
+    }
+  
+    weak_count<_Thread>&
+    operator=(const weak_count<_Thread>& __r) // nothrow
+    {
+      _Sp_counted_base<_Thread> * __tmp = __r._M_pi;
+      if (__tmp != 0)
+        __tmp->weak_add_ref();
+      if (_M_pi != 0)
+        _M_pi->weak_release();
+      _M_pi = __tmp;
+  
+      return *this;
+    }
+  
+    void
+    swap(weak_count<_Thread>& __r) // nothrow
+    {
+      _Sp_counted_base<_Thread> * __tmp = __r._M_pi;
+      __r._M_pi = _M_pi;
+      _M_pi = __tmp;
+    }
+  
+    long
+    use_count() const // nothrow
+    { return _M_pi != 0 ? _M_pi->use_count() : 0; }
+  
+    friend inline bool
+    operator==(const weak_count<_Thread>& __a, const weak_count<_Thread>& __b)
+    { return __a._M_pi == __b._M_pi; }
+  
+    friend inline bool
+    operator<(const weak_count<_Thread>& __a, const weak_count<_Thread>& __b)
+    { return std::less<_Sp_counted_base<_Thread>*>()(__a._M_pi, __b._M_pi); }
+  };
 
-  long
-  use_count() const // nothrow
-  { return _M_pi != 0 ? _M_pi->use_count() : 0; }
-
-  bool
-  unique() const // nothrow
-  { return this->use_count() == 1; }
-
-  friend inline bool
-  operator==(const shared_count& __a, const shared_count& __b)
-  { return __a._M_pi == __b._M_pi; }
-
-  friend inline bool
-  operator<(const shared_count& __a, const shared_count& __b)
-  { return std::less<_Sp_counted_base*>()(__a._M_pi, __b._M_pi); }
-
-  void*
-  get_deleter(const std::type_info& __ti) const
-  { return _M_pi ? _M_pi->get_deleter(__ti) : 0; }
-};
-
-class weak_count
-{
-private:
-
-  _Sp_counted_base* _M_pi;
-
-  friend class shared_count;
-
-public:
-
-  weak_count()
-  : _M_pi(0) // nothrow
-  { }
-
-  weak_count(const shared_count& __r)
-  : _M_pi(__r._M_pi) // nothrow
+template<bool _Thread>
+  inline
+  shared_count<_Thread>::shared_count(const weak_count<_Thread>& __r)
+  : _M_pi(__r._M_pi)
   {
     if (_M_pi != 0)
-      _M_pi->weak_add_ref();
+      _M_pi->add_ref_lock();
+    else
+      __throw_bad_weak_ptr();
   }
 
-  weak_count(const weak_count& __r)
-  : _M_pi(__r._M_pi) // nothrow
-  {
-    if (_M_pi != 0)
-      _M_pi->weak_add_ref();
-  }
 
-  ~weak_count() // nothrow
-  {
-    if (_M_pi != 0)
-      _M_pi->weak_release();
-  }
-
-  weak_count&
-  operator=(const shared_count& __r) // nothrow
-  {
-    _Sp_counted_base* __tmp = __r._M_pi;
-    if (__tmp != 0)
-      __tmp->weak_add_ref();
-    if (_M_pi != 0)
-      _M_pi->weak_release();
-    _M_pi = __tmp;
-
-    return *this;
-  }
-
-  weak_count&
-  operator=(const weak_count& __r) // nothrow
-  {
-    _Sp_counted_base * __tmp = __r._M_pi;
-    if (__tmp != 0)
-      __tmp->weak_add_ref();
-    if (_M_pi != 0)
-      _M_pi->weak_release();
-    _M_pi = __tmp;
-
-    return *this;
-  }
-
-  void
-  swap(weak_count& __r) // nothrow
-  {
-    _Sp_counted_base * __tmp = __r._M_pi;
-    __r._M_pi = _M_pi;
-    _M_pi = __tmp;
-  }
-
-  long
-  use_count() const // nothrow
-  { return _M_pi != 0 ? _M_pi->use_count() : 0; }
-
-  friend inline bool
-  operator==(const weak_count& __a, const weak_count& __b)
-  { return __a._M_pi == __b._M_pi; }
-
-  friend inline bool
-  operator<(const weak_count& __a, const weak_count& __b)
-  { return std::less<_Sp_counted_base*>()(__a._M_pi, __b._M_pi); }
-};
-
-inline
-shared_count::shared_count(const weak_count& __r)
-: _M_pi(__r._M_pi)
-{
-  if (_M_pi != 0)
-    _M_pi->add_ref_lock();
-  else
-    __throw_bad_weak_ptr();
-}
-
-
 // fwd decls
-template<typename _Tp>
+template<typename _Tp, bool _Thread = false>
   class shared_ptr;
 
-template<typename _Tp>
+template<typename _Tp, bool _Thread = false>
   class weak_ptr;
 
-template<typename _Tp>
+template<typename _Tp, bool _Thread>
   class enable_shared_from_this;
 
 struct __static_cast_tag {};
@@ -458,20 +481,21 @@
 // enable_shared_from_this support
 
 // friend of enable_shared_from_this
-template<typename _Tp1, typename _Tp2>
+template<bool _Thread, typename _Tp1, typename _Tp2>
   void
-  __enable_shared_from_this(const shared_count& __pn,
-                            const enable_shared_from_this<_Tp1>* __pe,
+  __enable_shared_from_this(const shared_count<_Thread>& __pn,
+                            const enable_shared_from_this<_Tp1, _Thread>* __pe,
                             const _Tp2* __px );
 
-inline void
-__enable_shared_from_this(const shared_count&, ...)
-{ }
+template<bool _Thread>
+  inline void
+  __enable_shared_from_this(const shared_count<_Thread>&, ...)
+  { }
 
 
 // get_deleter must be declared before friend declaration by shared_ptr.
-template<typename _Del, typename _Tp>
-  _Del* get_deleter(const shared_ptr<_Tp>&);
+template<typename _Del, typename _Tp, bool _Thread>
+  _Del* get_deleter(const shared_ptr<_Tp, _Thread>&);
 
 /**
  *  @class shared_ptr <tr1/memory>
@@ -480,7 +504,7 @@
  *  The object pointed to is deleted when the last shared_ptr pointing to it
  *  is destroyed or reset.
  */
-template<typename _Tp>
+template<typename _Tp, bool _Thread>
   class shared_ptr
   {
     typedef typename shared_ptr_traits<_Tp>::reference _Reference;
@@ -542,7 +566,7 @@
      *  @throw  std::bad_alloc, in which case 
      */
     template<typename _Tp1>
-      shared_ptr(const shared_ptr<_Tp1>& __r)
+      shared_ptr(const shared_ptr<_Tp1, _Thread>& __r)
       : _M_ptr(__r._M_ptr), _M_refcount(__r._M_refcount) // never throws
       {
         __glibcxx_function_requires(_ConvertibleConcept<_Tp1*, _Tp*>)
@@ -556,7 +580,7 @@
      *          in which case the constructor has no effect.
      */
     template<typename _Tp1>
-      explicit shared_ptr(const weak_ptr<_Tp1>& __r)
+      explicit shared_ptr(const weak_ptr<_Tp1, _Thread>& __r)
       : _M_refcount(__r._M_refcount) // may throw
       {
         __glibcxx_function_requires(_ConvertibleConcept<_Tp1*, _Tp*>)
@@ -575,35 +599,35 @@
         // TODO requires r.release() convertible to _Tp*, Tp1 is complete,
         // delete r.release() well-formed
         _Tp1 * __tmp = __r.get();
-        _M_refcount = shared_count(__r);
+        _M_refcount = shared_count<_Thread>(__r);
 
         __enable_shared_from_this( _M_refcount, __tmp, __tmp );
       }
 
     template<typename _Tp1>
-      shared_ptr(const shared_ptr<_Tp1>& __r, __static_cast_tag)
+      shared_ptr(const shared_ptr<_Tp1, _Thread>& __r, __static_cast_tag)
       : _M_ptr(static_cast<element_type*>(__r._M_ptr)),
 	_M_refcount(__r._M_refcount)
       { }
 
     template<typename _Tp1>
-      shared_ptr(const shared_ptr<_Tp1>& __r, __const_cast_tag)
+      shared_ptr(const shared_ptr<_Tp1, _Thread>& __r, __const_cast_tag)
       : _M_ptr(const_cast<element_type*>(__r._M_ptr)),
 	_M_refcount(__r._M_refcount)
       { }
 
     template<typename _Tp1>
-      shared_ptr(const shared_ptr<_Tp1>& __r, __dynamic_cast_tag)
+      shared_ptr(const shared_ptr<_Tp1, _Thread>& __r, __dynamic_cast_tag)
       : _M_ptr(dynamic_cast<element_type*>(__r._M_ptr)),
 	_M_refcount(__r._M_refcount)
       {
         if (_M_ptr == 0) // need to allocate new counter -- the cast failed
-          _M_refcount = shared_count();
+          _M_refcount = shared_count<_Thread>();
       }
 
     template<typename _Tp1>
       shared_ptr&
-      operator=(const shared_ptr<_Tp1>& __r) // never throws
+      operator=(const shared_ptr<_Tp1, _Thread>& __r) // never throws
       {
         _M_ptr = __r._M_ptr;
         _M_refcount = __r._M_refcount; // shared_count::op= doesn't throw
@@ -672,7 +696,7 @@
     { return _M_refcount.use_count(); }
 
     void
-    swap(shared_ptr<_Tp>& __other) // never throws
+    swap(shared_ptr<_Tp, _Thread>& __other) // never throws
     {
       std::swap(_M_ptr, __other._M_ptr);
       _M_refcount.swap(__other._M_refcount);
@@ -683,41 +707,41 @@
     _M_get_deleter(const std::type_info& __ti) const
     { return _M_refcount.get_deleter(__ti); }
 
-    template<typename _Tp1>
+    template<typename _Tp1, bool _Thread1>
       bool
-      _M_less(const shared_ptr<_Tp1>& __rhs) const
+      _M_less(const shared_ptr<_Tp1, _Thread1>& __rhs) const
       { return _M_refcount < __rhs._M_refcount; }
 
-    template<typename _Tp1> friend class shared_ptr;
-    template<typename _Tp1> friend class weak_ptr;
+    template<typename _Tp1, bool _Thread1> friend class shared_ptr;
+    template<typename _Tp1, bool _Thread1> friend class weak_ptr;
 
-    template<typename _Del, typename _Tp1>
-      friend _Del* get_deleter(const shared_ptr<_Tp1>&);
+    template<typename _Del, typename _Tp1, bool _Thread1>
+      friend _Del* get_deleter(const shared_ptr<_Tp1, _Thread1>&);
 
     // friends injected into enclosing namespace and found by ADL:
     template<typename _Tp1>
       friend inline bool
-      operator==(const shared_ptr& __a, const shared_ptr<_Tp1>& __b)
+      operator==(const shared_ptr& __a, const shared_ptr<_Tp1, _Thread>& __b)
       { return __a.get() == __b.get(); }
 
     template<typename _Tp1>
       friend inline bool
-      operator!=(const shared_ptr& __a, const shared_ptr<_Tp1>& __b)
+      operator!=(const shared_ptr& __a, const shared_ptr<_Tp1, _Thread>& __b)
       { return __a.get() != __b.get(); }
 
     template<typename _Tp1>
       friend inline bool
-      operator<(const shared_ptr& __a, const shared_ptr<_Tp1>& __b)
+      operator<(const shared_ptr& __a, const shared_ptr<_Tp1, _Thread>& __b)
       { return __a._M_less(__b); }
 
     _Tp*         _M_ptr;         // contained pointer
-    shared_count _M_refcount;    // reference counter
+    shared_count<_Thread> _M_refcount;    // reference counter
   };  // shared_ptr
 
 // 2.2.3.8 shared_ptr specialized algorithms.
-template<typename _Tp>
+template<typename _Tp, bool _Thread>
   inline void
-  swap(shared_ptr<_Tp>& __a, shared_ptr<_Tp>& __b)
+  swap(shared_ptr<_Tp, _Thread>& __a, shared_ptr<_Tp, _Thread>& __b)
   { __a.swap(__b); }
 
 // 2.2.3.9 shared_ptr casts
@@ -726,11 +750,11 @@
  *           will eventually result in undefined behaviour,
  *           attempting to delete the same object twice.
  */
-template<typename _Tp, typename _Tp1>
-  shared_ptr<_Tp>
-  static_pointer_cast(const shared_ptr<_Tp1>& __r)
+template<typename _Tp, typename _Tp1, bool _Thread>
+  shared_ptr<_Tp, _Thread>
+  static_pointer_cast(const shared_ptr<_Tp1, _Thread>& __r)
   {
-    return shared_ptr<_Tp>(__r, __static_cast_tag());
+    return shared_ptr<_Tp, _Thread>(__r, __static_cast_tag());
   }
 
 /** @warning The seemingly equivalent
@@ -738,11 +762,11 @@
  *           will eventually result in undefined behaviour,
  *           attempting to delete the same object twice.
  */
-template<typename _Tp, typename _Tp1>
-  shared_ptr<_Tp>
-  const_pointer_cast(const shared_ptr<_Tp1>& __r)
+template<typename _Tp, typename _Tp1, bool _Thread>
+  shared_ptr<_Tp, _Thread>
+  const_pointer_cast(const shared_ptr<_Tp1, _Thread>& __r)
   {
-    return shared_ptr<_Tp>(__r, __const_cast_tag());
+    return shared_ptr<_Tp, _Thread>(__r, __const_cast_tag());
   }
 
 /** @warning The seemingly equivalent
@@ -750,30 +774,30 @@
  *           will eventually result in undefined behaviour,
  *           attempting to delete the same object twice.
  */
-template<typename _Tp, typename _Tp1>
-  shared_ptr<_Tp>
-  dynamic_pointer_cast(const shared_ptr<_Tp1>& __r)
+template<typename _Tp, typename _Tp1, bool _Thread>
+  shared_ptr<_Tp, _Thread>
+  dynamic_pointer_cast(const shared_ptr<_Tp1, _Thread>& __r)
   {
-    return shared_ptr<_Tp>(__r, __dynamic_cast_tag());
+    return shared_ptr<_Tp, _Thread>(__r, __dynamic_cast_tag());
   }
 
 // 2.2.3.7 shared_ptr I/O
-template<typename _Ch, typename _Tr, typename _Tp>
+template<typename _Ch, typename _Tr, typename _Tp, bool _Thread>
   std::basic_ostream<_Ch, _Tr>&
-  operator<<(std::basic_ostream<_Ch, _Tr>& __os, const shared_ptr<_Tp>& __p)
+  operator<<(std::basic_ostream<_Ch, _Tr>& __os, const shared_ptr<_Tp, _Thread>& __p)
   {
     __os << __p.get();
     return __os;
   }
 
 // 2.2.3.10 shared_ptr get_deleter (experimental)
-template<typename _Del, typename _Tp>
+template<typename _Del, typename _Tp, bool _Thread>
   inline _Del*
-  get_deleter(const shared_ptr<_Tp>& __p)
+  get_deleter(const shared_ptr<_Tp, _Thread>& __p)
   { return static_cast<_Del*>(__p._M_get_deleter(typeid(_Del))); }
 
 
-template<typename _Tp>
+template<typename _Tp, bool _Thread>
   class weak_ptr
   {
   public:
@@ -804,7 +828,7 @@
     //
 
     template<typename _Tp1>
-      weak_ptr(const weak_ptr<_Tp1>& r)
+      weak_ptr(const weak_ptr<_Tp1, _Thread>& r)
       : _M_refcount(r._M_refcount) // never throws
       {
         __glibcxx_function_requires(_ConvertibleConcept<_Tp1*, _Tp*>)
@@ -812,7 +836,7 @@
       }
 
     template<typename _Tp1>
-      weak_ptr(const shared_ptr<_Tp1>& r)
+      weak_ptr(const shared_ptr<_Tp1, _Thread>& r)
       : _M_ptr(r._M_ptr), _M_refcount(r._M_refcount) // never throws
       {
         __glibcxx_function_requires(_ConvertibleConcept<_Tp1*, _Tp*>)
@@ -820,7 +844,7 @@
 
     template<typename _Tp1>
       weak_ptr&
-      operator=(const weak_ptr<_Tp1>& r) // never throws
+      operator=(const weak_ptr<_Tp1, _Thread>& r) // never throws
       {
         _M_ptr = r.lock().get();
         _M_refcount = r._M_refcount;
@@ -829,25 +853,25 @@
 
     template<typename _Tp1>
       weak_ptr&
-      operator=(const shared_ptr<_Tp1>& r) // never throws
+      operator=(const shared_ptr<_Tp1, _Thread>& r) // never throws
       {
         _M_ptr = r._M_ptr;
         _M_refcount = r._M_refcount;
         return *this;
       }
 
-    shared_ptr<_Tp>
+    shared_ptr<_Tp, _Thread>
     lock() const // never throws
     {
 #ifdef __GTHREADS
 
       // optimization: avoid throw overhead
       if (expired())
-	return shared_ptr<element_type>();
+	return shared_ptr<element_type, _Thread>();
       
       try
 	{
-	  return shared_ptr<element_type>(*this);
+	  return shared_ptr<element_type, _Thread>(*this);
 	}
       catch (const bad_weak_ptr&)
 	{
@@ -860,8 +884,8 @@
 #else
 
       // optimization: avoid try/catch overhead when single threaded
-      return expired() ? shared_ptr<element_type>()
-	               : shared_ptr<element_type>(*this);
+      return expired() ? shared_ptr<element_type, _Thread>()
+	               : shared_ptr<element_type, _Thread>(*this);
 
 #endif
     } // XXX MT
@@ -889,12 +913,12 @@
 
     template<typename _Tp1>
       bool
-      _M_less(const weak_ptr<_Tp1>& __rhs) const
+      _M_less(const weak_ptr<_Tp1, _Thread>& __rhs) const
       { return _M_refcount < __rhs._M_refcount; }
 
     // used by __enable_shared_from_this
     void
-    _M_assign(_Tp* __ptr, const shared_count& __refcount)
+    _M_assign(_Tp* __ptr, const shared_count<_Thread>& __refcount)
     {
       _M_ptr = __ptr;
       _M_refcount = __refcount;
@@ -904,26 +928,26 @@
 
     template<typename _Tp1>
       friend inline bool
-      operator<(const weak_ptr& __lhs, const weak_ptr<_Tp1>& __rhs)
+      operator<(const weak_ptr& __lhs, const weak_ptr<_Tp1, _Thread>& __rhs)
       { return __lhs._M_less(__rhs); }
 
-    template<typename _Tp1> friend class weak_ptr;
-    template<typename _Tp1> friend class shared_ptr;
-    friend class enable_shared_from_this<_Tp>;
+    template<typename _Tp1, bool _Thread1> friend class weak_ptr;
+    template<typename _Tp1, bool _Thread1> friend class shared_ptr;
+    friend class enable_shared_from_this<_Tp, _Thread>;
 
     _Tp*       _M_ptr;           // contained pointer
-    weak_count _M_refcount;      // reference counter
+    weak_count<_Thread> _M_refcount;      // reference counter
 
   };  // weak_ptr
 
 // 2.2.4.7 weak_ptr specialized algorithms.
-template<typename _Tp>
+template<typename _Tp, bool _Thread>
   void
-  swap(weak_ptr<_Tp>& __a, weak_ptr<_Tp>& __b)
+  swap(weak_ptr<_Tp, _Thread>& __a, weak_ptr<_Tp, _Thread>& __b)
   { __a.swap(__b); }
 
 
-template<typename _Tp>
+template<typename _Tp, bool _Thread = false>
   class enable_shared_from_this
   {
   protected:
@@ -943,29 +967,29 @@
 
   public:
 
-    shared_ptr<_Tp>
+    shared_ptr<_Tp, _Thread>
     shared_from_this()
     {
-      shared_ptr<_Tp> __p(this->_M_weak_this);
+      shared_ptr<_Tp, _Thread> __p(this->_M_weak_this);
       return __p;
     }
 
-    shared_ptr<const _Tp>
+    shared_ptr<const _Tp, _Thread>
     shared_from_this() const
     {
-      shared_ptr<const _Tp> __p(this->_M_weak_this);
+      shared_ptr<const _Tp, _Thread> __p(this->_M_weak_this);
       return __p;
     }
 
   private:
     template<typename _Tp1>
       void
-      _M_weak_assign(_Tp1* __p, const shared_count& __n) const
+      _M_weak_assign(_Tp1* __p, const shared_count<_Thread>& __n) const
       { _M_weak_this._M_assign(__p, __n); }
 
     template<typename _Tp1>
       friend void
-      __enable_shared_from_this(const shared_count& __pn,
+      __enable_shared_from_this(const shared_count<_Thread>& __pn,
 				const enable_shared_from_this* __pe,
 				const _Tp1* __px)
       {
@@ -973,7 +997,7 @@
           __pe->_M_weak_assign(const_cast<_Tp1*>(__px), __pn);
       }
 
-    mutable weak_ptr<_Tp> _M_weak_this;
+    mutable weak_ptr<_Tp, _Thread> _M_weak_this;
   };
 
 _GLIBCXX_END_NAMESPACE
