public inbox for gcc-patches@gcc.gnu.org
 help / color / mirror / Atom feed
* cxx-mem-model merge [6 of 9] - libstdc++-v3
@ 2011-11-03 23:52 Andrew MacLeod
  2011-11-04 18:17 ` Jeff Law
  2011-11-07  0:54 ` Hans-Peter Nilsson
  0 siblings, 2 replies; 22+ messages in thread
From: Andrew MacLeod @ 2011-11-03 23:52 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 642 bytes --]

These are the changes to libstdc++-v3 to make use of the new atomics.  I 
changed the files to use the new __atomic builtins, and bkoz reshuffled 
the include file layout to better suit the new C++ approach.

Previously, libstdc++ provided a locked implementation in atomic_0.h 
on the theory that it would eventually be used.  The new scheme leaves 
non-lock-free implementations to an external library, so the old lock 
implementation has been removed and things restructured now that 
multiple implementations don't have to be supported.  So a lot of this 
is churn: two include files were deleted and one was merged into 
another.


[-- Attachment #2: libstdc++v3.diff --]
[-- Type: text/plain, Size: 94194 bytes --]

2011-11-02  Andrew MacLeod  <amacleod@redhat.com>

	* include/std/atomic (is_lock_free): Add object pointer to 
	__atomic_is_lock_free.
	* include/bits/atomic_base.h (LOCKFREE_PROP): Add 0 for object ptr.
	(is_lock_free): Add object pointer to __atomic_is_lock_free.

2011-10-27  Benjamin Kosnik  <bkoz@redhat.com>
	    Andrew MacLeod  <amacleod@redhat.com>

	* include/Makefile.am (bits_headers): Remove atomic_0.h, atomic_2.h.
	* include/Makefile.in: Regenerate.
	* src/Makefile.am (sources): Rename atomic.cc to
	compatibility-atomic-c++0x.cc.
	* src/Makefile.in: Regenerate.
	* include/bits/atomic_0.h: Remove.
	* include/bits/atomic_2.h: Incorporate into...
	* include/bits/atomic_base.h: ...this.
	* include/std/atomic: Add generic atomic calls to basic atomic class.
	* src/atomic.cc: Move...
	* src/compatibility-atomic-c++0x.cc: ...here.
	* src/compatibility-c++0x.cc: Tweak.
	* testsuite/29_atomics/atomic/cons/user_pod.cc: Fix.
	* testsuite/29_atomics/atomic/requirements/explicit_instantiation/1.cc:
	  Same.
	* testsuite/29_atomics/headers/atomic/macros.cc: Same.

2011-10-25  Andrew MacLeod  <amacleod@redhat.com>

	* include/bits/atomic_2.h: Rename __atomic_exchange, __atomic_load,
	__atomic_store, and __atomic_compare_exchange to '_n' variant.

2011-10-20  Andrew MacLeod  <amacleod@redhat.com>

	* include/bits/atomic_2.h: Use __atomic_compare_exchange.

2011-10-17  Andrew MacLeod  <amacleod@redhat.com>

	* include/bits/atomic_2.h: Rename __sync_mem to __atomic.

2011-09-16  Andrew MacLeod  <amacleod@redhat.com>

	* include/bits/atomic_2.h (__atomic2): Use new
	__sync_mem routines.


Index: src/atomic.cc
===================================================================
*** src/atomic.cc	(.../trunk/libstdc++-v3)	(revision 180780)
--- src/atomic.cc	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
***************
*** 1,146 ****
- // Support for atomic operations -*- C++ -*-
- 
- // Copyright (C) 2008, 2009, 2010, 2011
- // Free Software Foundation, Inc.
- //
- // This file is part of the GNU ISO C++ Library.  This library is free
- // software; you can redistribute it and/or modify it under the
- // terms of the GNU General Public License as published by the
- // Free Software Foundation; either version 3, or (at your option)
- // any later version.
- 
- // This library is distributed in the hope that it will be useful,
- // but WITHOUT ANY WARRANTY; without even the implied warranty of
- // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- // GNU General Public License for more details.
- 
- // Under Section 7 of GPL version 3, you are granted additional
- // permissions described in the GCC Runtime Library Exception, version
- // 3.1, as published by the Free Software Foundation.
- 
- // You should have received a copy of the GNU General Public License and
- // a copy of the GCC Runtime Library Exception along with this program;
- // see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
- // <http://www.gnu.org/licenses/>.
- 
- #include "gstdint.h"
- #include <atomic>
- #include <mutex>
- 
- #define LOGSIZE 4
- 
- namespace
- {
- #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
-   std::mutex&
-   get_atomic_mutex()
-   {
-     static std::mutex atomic_mutex;
-     return atomic_mutex;
-   }
- #endif
- 
-   std::__atomic_flag_base flag_table[ 1 << LOGSIZE ] =
-     {
-       ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT,
-       ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT,
-       ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT,
-       ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT,
-     };
- } // anonymous namespace
- 
- namespace std _GLIBCXX_VISIBILITY(default)
- {
- _GLIBCXX_BEGIN_NAMESPACE_VERSION
- 
-   namespace __atomic0
-   {
-     bool
-     atomic_flag::test_and_set(memory_order) noexcept
-     {
- #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
-       lock_guard<mutex> __lock(get_atomic_mutex());
- #endif
-       bool result = _M_i;
-       _M_i = true;
-       return result;
-     }
- 
-     void
-     atomic_flag::clear(memory_order) noexcept
-     {
- #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
-       lock_guard<mutex> __lock(get_atomic_mutex());
- #endif
-       _M_i = false;
-     }
- 
-   _GLIBCXX_BEGIN_EXTERN_C
- 
-   bool
-   atomic_flag_test_and_set_explicit(__atomic_flag_base* __a,
- 				    memory_order __m) _GLIBCXX_NOTHROW
-   {
-     atomic_flag* d = static_cast<atomic_flag*>(__a);
-     return d->test_and_set(__m);
-   }
- 
-   void
-   atomic_flag_clear_explicit(__atomic_flag_base* __a,
- 			     memory_order __m) _GLIBCXX_NOTHROW
-   {
-     atomic_flag* d = static_cast<atomic_flag*>(__a);
-     return d->clear(__m);
-   }
- 
-   void
-   __atomic_flag_wait_explicit(__atomic_flag_base* __a,
- 			      memory_order __x) _GLIBCXX_NOTHROW
-   {
-     while (atomic_flag_test_and_set_explicit(__a, __x))
-       { };
-   }
- 
-   _GLIBCXX_CONST __atomic_flag_base*
-   __atomic_flag_for_address(const volatile void* __z) _GLIBCXX_NOTHROW
-   {
-     uintptr_t __u = reinterpret_cast<uintptr_t>(__z);
-     __u += (__u >> 2) + (__u << 4);
-     __u += (__u >> 7) + (__u << 5);
-     __u += (__u >> 17) + (__u << 13);
-     if (sizeof(uintptr_t) > 4)
-       __u += (__u >> 31);
-     __u &= ~((~uintptr_t(0)) << LOGSIZE);
-     return flag_table + __u;
-   }
- 
-   _GLIBCXX_END_EXTERN_C
- 
-   } // namespace __atomic0
- 
- _GLIBCXX_END_NAMESPACE_VERSION
- } // namespace
- 
- 
- // XXX GLIBCXX_ABI Deprecated
- // gcc-4.5.0
- // <atomic> signature changes
- 
- // The rename syntax for default exported names is
- //   asm (".symver name1,exportedname@GLIBCXX_3.4")
- //   asm (".symver name2,exportedname@@GLIBCXX_3.4.5")
- // In the future, GLIBCXX_ABI > 6 should remove all uses of
- // _GLIBCXX_*_SYMVER macros in this file.
- 
- #if defined(_GLIBCXX_SYMVER_GNU) && defined(PIC) \
-     && defined(_GLIBCXX_HAVE_AS_SYMVER_DIRECTIVE) \
-     && defined(_GLIBCXX_HAVE_SYMVER_SYMBOL_RENAMING_RUNTIME_SUPPORT)
- 
- #define _GLIBCXX_ASM_SYMVER(cur, old, version) \
-    asm (".symver " #cur "," #old "@@" #version);
- 
- _GLIBCXX_ASM_SYMVER(_ZNSt9__atomic011atomic_flag5clearESt12memory_order, _ZNVSt9__atomic011atomic_flag5clearESt12memory_order, GLIBCXX_3.4.11)
- 
- _GLIBCXX_ASM_SYMVER(_ZNSt9__atomic011atomic_flag12test_and_setESt12memory_order, _ZNVSt9__atomic011atomic_flag12test_and_setESt12memory_order, GLIBCXX_3.4.11)
- 
- #endif
--- 0 ----
Index: src/Makefile.in
===================================================================
*** src/Makefile.in	(.../trunk/libstdc++-v3)	(revision 180780)
--- src/Makefile.in	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
*************** am__objects_1 = atomicity.lo codecvt_mem
*** 105,113 ****
  @ENABLE_PARALLEL_TRUE@	compatibility-parallel_list-2.lo
  am__objects_5 = basic_file.lo c++locale.lo $(am__objects_2) \
  	$(am__objects_3) $(am__objects_4)
! am__objects_6 = atomic.lo bitmap_allocator.lo pool_allocator.lo \
! 	mt_allocator.lo codecvt.lo compatibility.lo \
! 	compatibility-c++0x.lo compatibility-debug_list.lo \
  	compatibility-debug_list-2.lo compatibility-list.lo \
  	compatibility-list-2.lo complex_io.lo ctype.lo debug.lo \
  	functexcept.lo functional.lo globals_io.lo hash_c++0x.lo \
--- 105,113 ----
  @ENABLE_PARALLEL_TRUE@	compatibility-parallel_list-2.lo
  am__objects_5 = basic_file.lo c++locale.lo $(am__objects_2) \
  	$(am__objects_3) $(am__objects_4)
! am__objects_6 = bitmap_allocator.lo pool_allocator.lo mt_allocator.lo \
! 	codecvt.lo compatibility.lo compatibility-c++0x.lo \
! 	compatibility-atomic-c++0x.lo compatibility-debug_list.lo \
  	compatibility-debug_list-2.lo compatibility-list.lo \
  	compatibility-list-2.lo complex_io.lo ctype.lo debug.lo \
  	functexcept.lo functional.lo globals_io.lo hash_c++0x.lo \
*************** host_sources_extra = \
*** 407,419 ****
  
  # Sources present in the src directory, always present.
  sources = \
- 	atomic.cc \
  	bitmap_allocator.cc \
  	pool_allocator.cc \
  	mt_allocator.cc \
  	codecvt.cc \
  	compatibility.cc \
  	compatibility-c++0x.cc \
  	compatibility-debug_list.cc \
  	compatibility-debug_list-2.cc \
  	compatibility-list.cc \
--- 407,419 ----
  
  # Sources present in the src directory, always present.
  sources = \
  	bitmap_allocator.cc \
  	pool_allocator.cc \
  	mt_allocator.cc \
  	codecvt.cc \
  	compatibility.cc \
  	compatibility-c++0x.cc \
+ 	compatibility-atomic-c++0x.cc \
  	compatibility-debug_list.cc \
  	compatibility-debug_list-2.cc \
  	compatibility-list.cc \
*************** compatibility-c++0x.lo: compatibility-c+
*** 917,922 ****
--- 917,927 ----
  compatibility-c++0x.o: compatibility-c++0x.cc
  	$(CXXCOMPILE) -std=gnu++0x -c $<
  
+ compatibility-atomic-c++0x.lo: compatibility-atomic-c++0x.cc
+ 	$(LTCXXCOMPILE) -std=gnu++0x -c $<
+ compatibility-atomic-c++0x.o: compatibility-atomic-c++0x.cc
+ 	$(CXXCOMPILE) -std=gnu++0x -c $<
+ 
  functional.lo: functional.cc
  	$(LTCXXCOMPILE) -std=gnu++0x -c $<
  functional.o: functional.cc
*************** limits.lo: limits.cc
*** 937,947 ****
  limits.o: limits.cc
  	$(CXXCOMPILE) -std=gnu++0x -c $<
  
- atomic.lo: atomic.cc
- 	$(LTCXXCOMPILE) -std=gnu++0x -c $<
- atomic.o: atomic.cc
- 	$(CXXCOMPILE) -std=gnu++0x -c $<
- 
  fstream-inst.lo: fstream-inst.cc
  	$(LTCXXCOMPILE) -std=gnu++0x -c $<
  fstream-inst.o: fstream-inst.cc
--- 942,947 ----
Index: src/compatibility-atomic-c++0x.cc
===================================================================
*** src/compatibility-atomic-c++0x.cc	(.../trunk/libstdc++-v3)	(revision 0)
--- src/compatibility-atomic-c++0x.cc	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
***************
*** 0 ****
--- 1,158 ----
+ // <atomic> compatibility -*- C++ -*-
+ 
+ // Copyright (C) 2008, 2009, 2010, 2011
+ // Free Software Foundation, Inc.
+ //
+ // This file is part of the GNU ISO C++ Library.  This library is free
+ // software; you can redistribute it and/or modify it under the
+ // terms of the GNU General Public License as published by the
+ // Free Software Foundation; either version 3, or (at your option)
+ // any later version.
+ 
+ // This library is distributed in the hope that it will be useful,
+ // but WITHOUT ANY WARRANTY; without even the implied warranty of
+ // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ // GNU General Public License for more details.
+ 
+ // Under Section 7 of GPL version 3, you are granted additional
+ // permissions described in the GCC Runtime Library Exception, version
+ // 3.1, as published by the Free Software Foundation.
+ 
+ // You should have received a copy of the GNU General Public License and
+ // a copy of the GCC Runtime Library Exception along with this program;
+ // see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+ // <http://www.gnu.org/licenses/>.
+ 
+ #include "gstdint.h"
+ #include <atomic>
+ #include <mutex>
+ 
+ // XXX GLIBCXX_ABI Deprecated
+ // gcc-4.7.0
+ 
+ #define LOGSIZE 4
+ 
+ namespace
+ {
+ #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
+   std::mutex&
+   get_atomic_mutex()
+   {
+     static std::mutex atomic_mutex;
+     return atomic_mutex;
+   }
+ #endif
+ 
+   std::__atomic_flag_base flag_table[ 1 << LOGSIZE ] =
+     {
+       ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT,
+       ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT,
+       ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT,
+       ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT, ATOMIC_FLAG_INIT,
+     };
+ } // anonymous namespace
+ 
+ namespace std _GLIBCXX_VISIBILITY(default)
+ {
+ _GLIBCXX_BEGIN_NAMESPACE_VERSION
+ 
+   namespace __atomic0
+   {
+ 
+     struct atomic_flag : public __atomic_flag_base
+     {
+      bool
+      test_and_set(memory_order) noexcept;
+      
+      void
+      clear(memory_order) noexcept;
+     };
+ 
+     bool
+     atomic_flag::test_and_set(memory_order) noexcept
+     {
+ #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
+       lock_guard<mutex> __lock(get_atomic_mutex());
+ #endif
+       bool result = _M_i;
+       _M_i = true;
+       return result;
+     }
+ 
+     void
+     atomic_flag::clear(memory_order) noexcept
+     {
+ #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
+       lock_guard<mutex> __lock(get_atomic_mutex());
+ #endif
+       _M_i = false;
+     }
+   } // namespace __atomic0
+ 
+   _GLIBCXX_BEGIN_EXTERN_C
+ 
+   bool
+   atomic_flag_test_and_set_explicit(__atomic_flag_base* __a,
+ 				    memory_order __m) _GLIBCXX_NOTHROW
+   {
+     atomic_flag* d = static_cast<atomic_flag*>(__a);
+     return d->test_and_set(__m);
+   }
+ 
+   void
+   atomic_flag_clear_explicit(__atomic_flag_base* __a,
+ 			     memory_order __m) _GLIBCXX_NOTHROW
+   {
+     atomic_flag* d = static_cast<atomic_flag*>(__a);
+     return d->clear(__m);
+   }
+ 
+   void
+   __atomic_flag_wait_explicit(__atomic_flag_base* __a,
+ 			      memory_order __x) _GLIBCXX_NOTHROW
+   {
+     while (atomic_flag_test_and_set_explicit(__a, __x))
+       { };
+   }
+ 
+   _GLIBCXX_CONST __atomic_flag_base*
+   __atomic_flag_for_address(const volatile void* __z) _GLIBCXX_NOTHROW
+   {
+     uintptr_t __u = reinterpret_cast<uintptr_t>(__z);
+     __u += (__u >> 2) + (__u << 4);
+     __u += (__u >> 7) + (__u << 5);
+     __u += (__u >> 17) + (__u << 13);
+     if (sizeof(uintptr_t) > 4)
+       __u += (__u >> 31);
+     __u &= ~((~uintptr_t(0)) << LOGSIZE);
+     return flag_table + __u;
+   }
+ 
+   _GLIBCXX_END_EXTERN_C
+ 
+ _GLIBCXX_END_NAMESPACE_VERSION
+ } // namespace std
+ 
+ 
+ // XXX GLIBCXX_ABI Deprecated
+ // gcc-4.5.0
+ // <atomic> signature changes
+ 
+ // The rename syntax for default exported names is
+ //   asm (".symver name1,exportedname@GLIBCXX_3.4")
+ //   asm (".symver name2,exportedname@@GLIBCXX_3.4.5")
+ // In the future, GLIBCXX_ABI > 6 should remove all uses of
+ // _GLIBCXX_*_SYMVER macros in this file.
+ 
+ #if defined(_GLIBCXX_SYMVER_GNU) && defined(PIC) \
+     && defined(_GLIBCXX_HAVE_AS_SYMVER_DIRECTIVE) \
+     && defined(_GLIBCXX_HAVE_SYMVER_SYMBOL_RENAMING_RUNTIME_SUPPORT)
+ 
+ #define _GLIBCXX_ASM_SYMVER(cur, old, version) \
+    asm (".symver " #cur "," #old "@@" #version);
+ 
+ _GLIBCXX_ASM_SYMVER(_ZNSt9__atomic011atomic_flag5clearESt12memory_order, _ZNVSt9__atomic011atomic_flag5clearESt12memory_order, GLIBCXX_3.4.11)
+ 
+ _GLIBCXX_ASM_SYMVER(_ZNSt9__atomic011atomic_flag12test_and_setESt12memory_order, _ZNVSt9__atomic011atomic_flag12test_and_setESt12memory_order, GLIBCXX_3.4.11)
+ 
+ #endif
Index: src/Makefile.am
===================================================================
*** src/Makefile.am	(.../trunk/libstdc++-v3)	(revision 180780)
--- src/Makefile.am	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
*************** endif
*** 190,202 ****
  
  # Sources present in the src directory, always present.
  sources = \
- 	atomic.cc \
  	bitmap_allocator.cc \
  	pool_allocator.cc \
  	mt_allocator.cc \
  	codecvt.cc \
  	compatibility.cc \
  	compatibility-c++0x.cc \
  	compatibility-debug_list.cc \
  	compatibility-debug_list-2.cc \
  	compatibility-list.cc \
--- 190,202 ----
  
  # Sources present in the src directory, always present.
  sources = \
  	bitmap_allocator.cc \
  	pool_allocator.cc \
  	mt_allocator.cc \
  	codecvt.cc \
  	compatibility.cc \
  	compatibility-c++0x.cc \
+ 	compatibility-atomic-c++0x.cc \
  	compatibility-debug_list.cc \
  	compatibility-debug_list-2.cc \
  	compatibility-list.cc \
*************** compatibility-c++0x.lo: compatibility-c+
*** 323,328 ****
--- 323,333 ----
  compatibility-c++0x.o: compatibility-c++0x.cc
  	$(CXXCOMPILE) -std=gnu++0x -c $<
  
+ compatibility-atomic-c++0x.lo: compatibility-atomic-c++0x.cc
+ 	$(LTCXXCOMPILE) -std=gnu++0x -c $<
+ compatibility-atomic-c++0x.o: compatibility-atomic-c++0x.cc
+ 	$(CXXCOMPILE) -std=gnu++0x -c $<
+ 
  functional.lo: functional.cc
  	$(LTCXXCOMPILE) -std=gnu++0x -c $<
  functional.o: functional.cc
*************** limits.lo: limits.cc
*** 343,353 ****
  limits.o: limits.cc
  	$(CXXCOMPILE) -std=gnu++0x -c $<
  
- atomic.lo: atomic.cc
- 	$(LTCXXCOMPILE) -std=gnu++0x -c $<
- atomic.o: atomic.cc
- 	$(CXXCOMPILE) -std=gnu++0x -c $<
- 
  fstream-inst.lo: fstream-inst.cc
  	$(LTCXXCOMPILE) -std=gnu++0x -c $<
  fstream-inst.o: fstream-inst.cc
--- 348,353 ----
Index: include/Makefile.in
===================================================================
*** include/Makefile.in	(.../trunk/libstdc++-v3)	(revision 180780)
--- include/Makefile.in	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
*************** bits_headers = \
*** 335,342 ****
  	${bits_srcdir}/alloc_traits.h \
  	${bits_srcdir}/allocator.h \
  	${bits_srcdir}/atomic_base.h \
- 	${bits_srcdir}/atomic_0.h \
- 	${bits_srcdir}/atomic_2.h \
  	${bits_srcdir}/basic_ios.h \
  	${bits_srcdir}/basic_ios.tcc \
  	${bits_srcdir}/basic_string.h \
--- 335,340 ----
Index: include/std/atomic
===================================================================
*** include/std/atomic	(.../trunk/libstdc++-v3)	(revision 180780)
--- include/std/atomic	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
***************
*** 39,46 ****
  #endif
  
  #include <bits/atomic_base.h>
- #include <bits/atomic_0.h>
- #include <bits/atomic_2.h>
  
  namespace std _GLIBCXX_VISIBILITY(default)
  {
--- 39,44 ----
*************** _GLIBCXX_BEGIN_NAMESPACE_VERSION
*** 167,235 ****
  
        constexpr atomic(_Tp __i) noexcept : _M_i(__i) { }
  
!       operator _Tp() const noexcept;
  
!       operator _Tp() const volatile noexcept;
  
        _Tp
!       operator=(_Tp __i) noexcept { store(__i); return __i; }
  
        _Tp
!       operator=(_Tp __i) volatile noexcept { store(__i); return __i; }
  
        bool
!       is_lock_free() const noexcept;
  
        bool
!       is_lock_free() const volatile noexcept;
  
        void
!       store(_Tp, memory_order = memory_order_seq_cst) noexcept;
  
        void
!       store(_Tp, memory_order = memory_order_seq_cst) volatile noexcept;
  
        _Tp
!       load(memory_order = memory_order_seq_cst) const noexcept;
  
        _Tp
!       load(memory_order = memory_order_seq_cst) const volatile noexcept;
  
        _Tp
!       exchange(_Tp __i, memory_order = memory_order_seq_cst) noexcept;
  
        _Tp
!       exchange(_Tp __i, memory_order = memory_order_seq_cst) volatile noexcept;
  
        bool
!       compare_exchange_weak(_Tp&, _Tp, memory_order, memory_order) noexcept;
  
        bool
!       compare_exchange_weak(_Tp&, _Tp, memory_order,
! 			    memory_order) volatile noexcept;
  
        bool
!       compare_exchange_weak(_Tp&, _Tp,
! 			    memory_order = memory_order_seq_cst) noexcept;
  
        bool
!       compare_exchange_weak(_Tp&, _Tp,
! 		       memory_order = memory_order_seq_cst) volatile noexcept;
  
        bool
!       compare_exchange_strong(_Tp&, _Tp, memory_order, memory_order) noexcept;
  
        bool
!       compare_exchange_strong(_Tp&, _Tp, memory_order,
! 			      memory_order) volatile noexcept;
  
        bool
!       compare_exchange_strong(_Tp&, _Tp,
! 			      memory_order = memory_order_seq_cst) noexcept;
  
        bool
!       compare_exchange_strong(_Tp&, _Tp,
! 		       memory_order = memory_order_seq_cst) volatile noexcept;
      };
  
  
--- 165,280 ----
  
        constexpr atomic(_Tp __i) noexcept : _M_i(__i) { }
  
!       operator _Tp() const noexcept
!       { return load(); }
  
!       operator _Tp() const volatile noexcept
!       { return load(); }
  
        _Tp
!       operator=(_Tp __i) noexcept 
!       { store(__i); return __i; }
  
        _Tp
!       operator=(_Tp __i) volatile noexcept 
!       { store(__i); return __i; }
  
        bool
!       is_lock_free() const noexcept
!       { return __atomic_is_lock_free(sizeof(_M_i), &_M_i); }
  
        bool
!       is_lock_free() const volatile noexcept
!       { return __atomic_is_lock_free(sizeof(_M_i), &_M_i); }
  
        void
!       store(_Tp __i, memory_order _m = memory_order_seq_cst) noexcept
!       { __atomic_store(&_M_i, &__i, _m); }
  
        void
!       store(_Tp __i, memory_order _m = memory_order_seq_cst) volatile noexcept
!       { __atomic_store(&_M_i, &__i, _m); }
  
        _Tp
!       load(memory_order _m = memory_order_seq_cst) const noexcept
!       { 
!         _Tp tmp;
! 	__atomic_load(&_M_i, &tmp, _m); 
! 	return tmp;
!       }
  
        _Tp
!       load(memory_order _m = memory_order_seq_cst) const volatile noexcept
!       { 
!         _Tp tmp;
! 	__atomic_load(&_M_i, &tmp, _m); 
! 	return tmp;
!       }
  
        _Tp
!       exchange(_Tp __i, memory_order _m = memory_order_seq_cst) noexcept
!       { 
!         _Tp tmp;
! 	__atomic_exchange(&_M_i, &__i, &tmp, _m); 
! 	return tmp;
!       }
  
        _Tp
!       exchange(_Tp __i, 
! 	       memory_order _m = memory_order_seq_cst) volatile noexcept
!       { 
!         _Tp tmp;
! 	__atomic_exchange(&_M_i, &__i, &tmp, _m); 
! 	return tmp;
!       }
  
        bool
!       compare_exchange_weak(_Tp& __e, _Tp __i, memory_order __s, 
! 			    memory_order __f) noexcept
!       {
! 	return __atomic_compare_exchange(&_M_i, &__e, &__i, true, __s, __f); 
!       }
  
        bool
!       compare_exchange_weak(_Tp& __e, _Tp __i, memory_order __s, 
! 			    memory_order __f) volatile noexcept
!       {
! 	return __atomic_compare_exchange(&_M_i, &__e, &__i, true, __s, __f); 
!       }
  
        bool
!       compare_exchange_weak(_Tp& __e, _Tp __i,
! 			    memory_order __m = memory_order_seq_cst) noexcept
!       { return compare_exchange_weak(__e, __i, __m, __m); }
  
        bool
!       compare_exchange_weak(_Tp& __e, _Tp __i,
! 		     memory_order __m = memory_order_seq_cst) volatile noexcept
!       { return compare_exchange_weak(__e, __i, __m, __m); }
  
        bool
!       compare_exchange_strong(_Tp& __e, _Tp __i, memory_order __s, 
! 			      memory_order __f) noexcept
!       {
! 	return __atomic_compare_exchange(&_M_i, &__e, &__i, false, __s, __f); 
!       }
  
        bool
!       compare_exchange_strong(_Tp& __e, _Tp __i, memory_order __s, 
! 			      memory_order __f) volatile noexcept
!       {
! 	return __atomic_compare_exchange(&_M_i, &__e, &__i, false, __s, __f); 
!       }
  
        bool
!       compare_exchange_strong(_Tp& __e, _Tp __i,
! 			       memory_order __m = memory_order_seq_cst) noexcept
!       { return compare_exchange_strong(__e, __i, __m, __m); }
  
        bool
!       compare_exchange_strong(_Tp& __e, _Tp __i,
! 		     memory_order __m = memory_order_seq_cst) volatile noexcept
!       { return compare_exchange_strong(__e, __i, __m, __m); }
      };
  
  
Index: include/bits/atomic_0.h
===================================================================
*** include/bits/atomic_0.h	(.../trunk/libstdc++-v3)	(revision 180780)
--- include/bits/atomic_0.h	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
***************
*** 1,677 ****
- // -*- C++ -*- header.
- 
- // Copyright (C) 2008, 2009, 2010, 2011
- // Free Software Foundation, Inc.
- //
- // This file is part of the GNU ISO C++ Library.  This library is free
- // software; you can redistribute it and/or modify it under the
- // terms of the GNU General Public License as published by the
- // Free Software Foundation; either version 3, or (at your option)
- // any later version.
- 
- // This library is distributed in the hope that it will be useful,
- // but WITHOUT ANY WARRANTY; without even the implied warranty of
- // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- // GNU General Public License for more details.
- 
- // Under Section 7 of GPL version 3, you are granted additional
- // permissions described in the GCC Runtime Library Exception, version
- // 3.1, as published by the Free Software Foundation.
- 
- // You should have received a copy of the GNU General Public License and
- // a copy of the GCC Runtime Library Exception along with this program;
- // see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
- // <http://www.gnu.org/licenses/>.
- 
- /** @file bits/atomic_0.h
-  *  This is an internal header file, included by other library headers.
-  *  Do not attempt to use it directly. @headername{atomic}
-  */
- 
- #ifndef _GLIBCXX_ATOMIC_0_H
- #define _GLIBCXX_ATOMIC_0_H 1
- 
- #pragma GCC system_header
- 
- namespace std _GLIBCXX_VISIBILITY(default)
- {
- _GLIBCXX_BEGIN_NAMESPACE_VERSION
- 
- // 0 == __atomic0 == Never lock-free
- namespace __atomic0
- {
-   _GLIBCXX_BEGIN_EXTERN_C
- 
-   void
-   atomic_flag_clear_explicit(__atomic_flag_base*, memory_order)
-   _GLIBCXX_NOTHROW;
- 
-   void
-   __atomic_flag_wait_explicit(__atomic_flag_base*, memory_order)
-   _GLIBCXX_NOTHROW;
- 
-   _GLIBCXX_CONST __atomic_flag_base*
-   __atomic_flag_for_address(const volatile void* __z) _GLIBCXX_NOTHROW;
- 
-   _GLIBCXX_END_EXTERN_C
- 
-   // Implementation specific defines.
- #define _ATOMIC_MEMBER_ _M_i
- 
-   // Implementation specific defines.
- #define _ATOMIC_LOAD_(__a, __x)						   \
-   ({typedef __typeof__(_ATOMIC_MEMBER_) __i_type;                          \
-     __i_type* __p = &_ATOMIC_MEMBER_;	   				   \
-     __atomic_flag_base* __g = __atomic_flag_for_address(__p);	  	   \
-     __atomic_flag_wait_explicit(__g, __x);				   \
-     __i_type __r = *__p;						   \
-     atomic_flag_clear_explicit(__g, __x);		       		   \
-     __r; })
- 
- #define _ATOMIC_STORE_(__a, __n, __x)					   \
-   ({typedef __typeof__(_ATOMIC_MEMBER_) __i_type;                          \
-     __i_type* __p = &_ATOMIC_MEMBER_;	   				   \
-     __typeof__(__n) __w = (__n);			       		   \
-     __atomic_flag_base* __g = __atomic_flag_for_address(__p);	  	   \
-     __atomic_flag_wait_explicit(__g, __x);				   \
-     *__p = __w;								   \
-     atomic_flag_clear_explicit(__g, __x);		       		   \
-     __w; })
- 
- #define _ATOMIC_MODIFY_(__a, __o, __n, __x)				   \
-   ({typedef __typeof__(_ATOMIC_MEMBER_) __i_type;                          \
-     __i_type* __p = &_ATOMIC_MEMBER_;	   				   \
-     __typeof__(__n) __w = (__n);			       		   \
-     __atomic_flag_base* __g = __atomic_flag_for_address(__p);	  	   \
-     __atomic_flag_wait_explicit(__g, __x);				   \
-     __i_type __r = *__p;		       				   \
-     *__p __o __w;					       		   \
-     atomic_flag_clear_explicit(__g, __x);		       		   \
-     __r; })
- 
- #define _ATOMIC_CMPEXCHNG_(__a, __e, __n, __x)				   \
-   ({typedef __typeof__(_ATOMIC_MEMBER_) __i_type;                          \
-     __i_type* __p = &_ATOMIC_MEMBER_;	   				   \
-     __typeof__(__e) __q = (__e);			       		   \
-     __typeof__(__n) __w = (__n);			       		   \
-     bool __r;						       		   \
-     __atomic_flag_base* __g = __atomic_flag_for_address(__p);	   	   \
-     __atomic_flag_wait_explicit(__g, __x);				   \
-     __i_type __t = *__p;		       				   \
-     if (*__q == __t) 							   \
-       {									   \
- 	*__p = (__i_type)__w;						   \
- 	__r = true;							   \
-       }									   \
-     else { *__q = __t; __r = false; }		       			   \
-     atomic_flag_clear_explicit(__g, __x);		       		   \
-     __r; })
- 
- 
-   /// atomic_flag
-   struct atomic_flag : public __atomic_flag_base
-   {
-     atomic_flag() noexcept = default;
-     ~atomic_flag() noexcept = default;
-     atomic_flag(const atomic_flag&) = delete;
-     atomic_flag& operator=(const atomic_flag&) = delete;
-     atomic_flag& operator=(const atomic_flag&) volatile = delete;
- 
-     // Conversion to ATOMIC_FLAG_INIT.
-     atomic_flag(bool __i) noexcept : __atomic_flag_base({ __i }) { }
- 
-     bool
-     test_and_set(memory_order __m = memory_order_seq_cst) noexcept;
- 
-     bool
-     test_and_set(memory_order __m = memory_order_seq_cst) volatile noexcept;
- 
-     void
-     clear(memory_order __m = memory_order_seq_cst) noexcept;
- 
-     void
-     clear(memory_order __m = memory_order_seq_cst) volatile noexcept;
-   };
- 
- 
-   /// Base class for atomic integrals.
-   //
-   // For each of the integral types, define atomic_[integral type] struct
-   //
-   // atomic_bool     bool
-   // atomic_char     char
-   // atomic_schar    signed char
-   // atomic_uchar    unsigned char
-   // atomic_short    short
-   // atomic_ushort   unsigned short
-   // atomic_int      int
-   // atomic_uint     unsigned int
-   // atomic_long     long
-   // atomic_ulong    unsigned long
-   // atomic_llong    long long
-   // atomic_ullong   unsigned long long
-   // atomic_char16_t char16_t
-   // atomic_char32_t char32_t
-   // atomic_wchar_t  wchar_t
- 
-   // Base type.
-   // NB: Assuming _ITp is an integral scalar type that is 1, 2, 4, or 8 bytes,
-   // since that is what GCC built-in functions for atomic memory access work on.
-   template<typename _ITp>
-     struct __atomic_base
-     {
-     private:
-       typedef _ITp 	__int_type;
- 
-       __int_type 	_M_i;
- 
-     public:
-       __atomic_base() noexcept = default;
-       ~__atomic_base() noexcept = default;
-       __atomic_base(const __atomic_base&) = delete;
-       __atomic_base& operator=(const __atomic_base&) = delete;
-       __atomic_base& operator=(const __atomic_base&) volatile = delete;
- 
-       // Requires __int_type convertible to _M_base._M_i.
-       constexpr __atomic_base(__int_type __i) noexcept : _M_i (__i) { }
- 
-       operator __int_type() const noexcept
-       { return load(); }
- 
-       operator __int_type() const volatile noexcept
-       { return load(); }
- 
-       __int_type
-       operator=(__int_type __i) noexcept
-       {
- 	store(__i);
- 	return __i;
-       }
- 
-       __int_type
-       operator=(__int_type __i) volatile noexcept
-       {
- 	store(__i);
- 	return __i;
-       }
- 
-       __int_type
-       operator++(int) noexcept
-       { return fetch_add(1); }
- 
-       __int_type
-       operator++(int) volatile noexcept
-       { return fetch_add(1); }
- 
-       __int_type
-       operator--(int) noexcept
-       { return fetch_sub(1); }
- 
-       __int_type
-       operator--(int) volatile noexcept
-       { return fetch_sub(1); }
- 
-       __int_type
-       operator++() noexcept
-       { return fetch_add(1) + 1; }
- 
-       __int_type
-       operator++() volatile noexcept
-       { return fetch_add(1) + 1; }
- 
-       __int_type
-       operator--() noexcept
-       { return fetch_sub(1) - 1; }
- 
-       __int_type
-       operator--() volatile noexcept
-       { return fetch_sub(1) - 1; }
- 
-       __int_type
-       operator+=(__int_type __i) noexcept
-       { return fetch_add(__i) + __i; }
- 
-       __int_type
-       operator+=(__int_type __i) volatile noexcept
-       { return fetch_add(__i) + __i; }
- 
-       __int_type
-       operator-=(__int_type __i) noexcept
-       { return fetch_sub(__i) - __i; }
- 
-       __int_type
-       operator-=(__int_type __i) volatile noexcept
-       { return fetch_sub(__i) - __i; }
- 
-       __int_type
-       operator&=(__int_type __i) noexcept
-       { return fetch_and(__i) & __i; }
- 
-       __int_type
-       operator&=(__int_type __i) volatile noexcept
-       { return fetch_and(__i) & __i; }
- 
-       __int_type
-       operator|=(__int_type __i) noexcept
-       { return fetch_or(__i) | __i; }
- 
-       __int_type
-       operator|=(__int_type __i) volatile noexcept
-       { return fetch_or(__i) | __i; }
- 
-       __int_type
-       operator^=(__int_type __i) noexcept
-       { return fetch_xor(__i) ^ __i; }
- 
-       __int_type
-       operator^=(__int_type __i) volatile noexcept
-       { return fetch_xor(__i) ^ __i; }
- 
-       bool
-       is_lock_free() const noexcept
-       { return false; }
- 
-       bool
-       is_lock_free() const volatile noexcept
-       { return false; }
- 
-       void
-       store(__int_type __i, memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_acquire);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	__glibcxx_assert(__m != memory_order_consume);
- 	_ATOMIC_STORE_(this, __i, __m);
-       }
- 
-       void
-       store(__int_type __i,
- 	    memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_acquire);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	__glibcxx_assert(__m != memory_order_consume);
- 	_ATOMIC_STORE_(this, __i, __m);
-       }
- 
-       __int_type
-       load(memory_order __m = memory_order_seq_cst) const noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_release);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	return _ATOMIC_LOAD_(this, __m);
-       }
- 
-       __int_type
-       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_release);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	return _ATOMIC_LOAD_(this, __m);
-       }
- 
-       __int_type
-       exchange(__int_type __i,
- 	       memory_order __m = memory_order_seq_cst) noexcept
-       { return _ATOMIC_MODIFY_(this, =, __i, __m); }
- 
-       __int_type
-       exchange(__int_type __i,
- 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return _ATOMIC_MODIFY_(this, =, __i, __m); }
- 
-       bool
-       compare_exchange_weak(__int_type& __i1, __int_type __i2,
- 			    memory_order __m1, memory_order __m2) noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 	return _ATOMIC_CMPEXCHNG_(this, &__i1, __i2, __m1);
-       }
- 
-       bool
-       compare_exchange_weak(__int_type& __i1, __int_type __i2,
- 			    memory_order __m1,
- 			    memory_order __m2) volatile noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 	return _ATOMIC_CMPEXCHNG_(this, &__i1, __i2, __m1);
-       }
- 
-       bool
-       compare_exchange_weak(__int_type& __i1, __int_type __i2,
- 			    memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	return compare_exchange_weak(__i1, __i2, __m,
- 				     __calculate_memory_order(__m));
-       }
- 
-       bool
-       compare_exchange_weak(__int_type& __i1, __int_type __i2,
- 		    memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	return compare_exchange_weak(__i1, __i2, __m,
- 				     __calculate_memory_order(__m));
-       }
- 
-       bool
-       compare_exchange_strong(__int_type& __i1, __int_type __i2,
- 			      memory_order __m1, memory_order __m2) noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 	return _ATOMIC_CMPEXCHNG_(this, &__i1, __i2, __m1);
-       }
- 
-       bool
-       compare_exchange_strong(__int_type& __i1, __int_type __i2,
- 			      memory_order __m1,
- 			      memory_order __m2) volatile noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 	return _ATOMIC_CMPEXCHNG_(this, &__i1, __i2, __m1);
-       }
- 
-       bool
-       compare_exchange_strong(__int_type& __i1, __int_type __i2,
- 			      memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	return compare_exchange_strong(__i1, __i2, __m,
- 				       __calculate_memory_order(__m));
-       }
- 
-       bool
-       compare_exchange_strong(__int_type& __i1, __int_type __i2,
- 		    memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	return compare_exchange_strong(__i1, __i2, __m,
- 				       __calculate_memory_order(__m));
-       }
- 
-       __int_type
-       fetch_add(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return _ATOMIC_MODIFY_(this, +=, __i, __m); }
- 
-       __int_type
-       fetch_add(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return _ATOMIC_MODIFY_(this, +=, __i, __m); }
- 
-       __int_type
-       fetch_sub(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return _ATOMIC_MODIFY_(this, -=, __i, __m); }
- 
-       __int_type
-       fetch_sub(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return _ATOMIC_MODIFY_(this, -=, __i, __m); }
- 
-       __int_type
-       fetch_and(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return _ATOMIC_MODIFY_(this, &=, __i, __m); }
- 
-       __int_type
-       fetch_and(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return _ATOMIC_MODIFY_(this, &=, __i, __m); }
- 
-       __int_type
-       fetch_or(__int_type __i,
- 	       memory_order __m = memory_order_seq_cst) noexcept
-       { return _ATOMIC_MODIFY_(this, |=, __i, __m); }
- 
-       __int_type
-       fetch_or(__int_type __i,
- 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return _ATOMIC_MODIFY_(this, |=, __i, __m); }
- 
-       __int_type
-       fetch_xor(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return _ATOMIC_MODIFY_(this, ^=, __i, __m); }
- 
-       __int_type
-       fetch_xor(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return _ATOMIC_MODIFY_(this, ^=, __i, __m); }
-     };
- 
- 
-   /// Partial specialization for pointer types.
-   template<typename _PTp>
-     struct __atomic_base<_PTp*>
-     {
-     private:
-       typedef _PTp* 	__return_pointer_type;
-       typedef void* 	__pointer_type;
-       __pointer_type 	_M_i;
- 
-     public:
-       __atomic_base() noexcept = default;
-       ~__atomic_base() noexcept = default;
-       __atomic_base(const __atomic_base&) = delete;
-       __atomic_base& operator=(const __atomic_base&) = delete;
-       __atomic_base& operator=(const __atomic_base&) volatile = delete;
- 
-       // Requires __pointer_type convertible to _M_i.
-       constexpr __atomic_base(__return_pointer_type __p) noexcept
-       : _M_i (__p) { }
- 
-       operator __return_pointer_type() const noexcept
-       { return reinterpret_cast<__return_pointer_type>(load()); }
- 
-       operator __return_pointer_type() const volatile noexcept
-       { return reinterpret_cast<__return_pointer_type>(load()); }
- 
-       __return_pointer_type
-       operator=(__pointer_type __p) noexcept
-       {
- 	store(__p);
- 	return reinterpret_cast<__return_pointer_type>(__p);
-       }
- 
-       __return_pointer_type
-       operator=(__pointer_type __p) volatile noexcept
-       {
- 	store(__p);
- 	return reinterpret_cast<__return_pointer_type>(__p);
-       }
- 
-       __return_pointer_type
-       operator++(int) noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_add(1)); }
- 
-       __return_pointer_type
-       operator++(int) volatile noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_add(1)); }
- 
-       __return_pointer_type
-       operator--(int) noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_sub(1)); }
- 
-       __return_pointer_type
-       operator--(int) volatile noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_sub(1)); }
- 
-       __return_pointer_type
-       operator++() noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_add(1) + 1); }
- 
-       __return_pointer_type
-       operator++() volatile noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_add(1) + 1); }
- 
-       __return_pointer_type
-       operator--() noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_sub(1) - 1); }
- 
-       __return_pointer_type
-       operator--() volatile noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_sub(1) - 1); }
- 
-       __return_pointer_type
-       operator+=(ptrdiff_t __d) noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_add(__d) + __d); }
- 
-       __return_pointer_type
-       operator+=(ptrdiff_t __d) volatile noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_add(__d) + __d); }
- 
-       __return_pointer_type
-       operator-=(ptrdiff_t __d) noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_sub(__d) - __d); }
- 
-       __return_pointer_type
-       operator-=(ptrdiff_t __d) volatile noexcept
-       { return reinterpret_cast<__return_pointer_type>(fetch_sub(__d) - __d); }
- 
-       bool
-       is_lock_free() const noexcept
-       { return true; }
- 
-       bool
-       is_lock_free() const volatile noexcept
-       { return true; }
- 
-       void
-       store(__pointer_type __p,
- 	    memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_acquire);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	__glibcxx_assert(__m != memory_order_consume);
- 	_ATOMIC_STORE_(this, __p, __m);
-       }
- 
-       void
-       store(__pointer_type __p,
- 	    memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_acquire);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	__glibcxx_assert(__m != memory_order_consume);
- 	volatile __pointer_type* __p2 = &_M_i;
- 	__typeof__(__p) __w = (__p);
- 	__atomic_flag_base* __g = __atomic_flag_for_address(__p2);
- 	__atomic_flag_wait_explicit(__g, __m);
- 	*__p2 = reinterpret_cast<__pointer_type>(__w);
- 	atomic_flag_clear_explicit(__g, __m);
- 	__w;
-       }
- 
-       __return_pointer_type
-       load(memory_order __m = memory_order_seq_cst) const noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_release);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	void* __v = _ATOMIC_LOAD_(this, __m);
- 	return reinterpret_cast<__return_pointer_type>(__v);
-       }
- 
-       __return_pointer_type
-       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_release);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	void* __v = _ATOMIC_LOAD_(this, __m);
- 	return reinterpret_cast<__return_pointer_type>(__v);
-       }
- 
-       __return_pointer_type
-       exchange(__pointer_type __p,
- 	       memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	void* __v = _ATOMIC_MODIFY_(this, =, __p, __m);
- 	return reinterpret_cast<__return_pointer_type>(__v);
-       }
- 
-       __return_pointer_type
-       exchange(__pointer_type __p,
- 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	volatile __pointer_type* __p2 = &_M_i;
- 	__typeof__(__p) __w = (__p);
- 	__atomic_flag_base* __g = __atomic_flag_for_address(__p2);
- 	__atomic_flag_wait_explicit(__g, __m);
- 	__pointer_type __r = *__p2;
- 	*__p2 = __w;
- 	atomic_flag_clear_explicit(__g, __m);
- 	__r;
- 	return reinterpret_cast<__return_pointer_type>(_M_i);
-       }
- 
-       bool
-       compare_exchange_strong(__return_pointer_type& __rp1, __pointer_type __p2,
- 			      memory_order __m1, memory_order __m2) noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 	__pointer_type& __p1 = reinterpret_cast<void*&>(__rp1);
- 	return _ATOMIC_CMPEXCHNG_(this, &__p1, __p2, __m1);
-       }
- 
-       bool
-       compare_exchange_strong(__return_pointer_type& __rp1, __pointer_type __p2,
- 			      memory_order __m1,
- 			      memory_order __m2) volatile noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 	__pointer_type& __p1 = reinterpret_cast<void*&>(__rp1);
- 	return _ATOMIC_CMPEXCHNG_(this, &__p1, __p2, __m1);
-       }
- 
-       __return_pointer_type
-       fetch_add(ptrdiff_t __d,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	void* __v = _ATOMIC_MODIFY_(this, +=, __d, __m);
- 	return reinterpret_cast<__return_pointer_type>(__v);
-       }
- 
-       __return_pointer_type
-       fetch_add(ptrdiff_t __d,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	void* __v = _ATOMIC_MODIFY_(this, +=, __d, __m);
- 	return reinterpret_cast<__return_pointer_type>(__v);
-       }
- 
-       __return_pointer_type
-       fetch_sub(ptrdiff_t __d,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	void* __v = _ATOMIC_MODIFY_(this, -=, __d, __m);
- 	return reinterpret_cast<__return_pointer_type>(__v);
-       }
- 
-       __return_pointer_type
-       fetch_sub(ptrdiff_t __d,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	void* __v = _ATOMIC_MODIFY_(this, -=, __d, __m);
- 	return reinterpret_cast<__return_pointer_type>(__v);
-       }
-     };
- 
- #undef _ATOMIC_LOAD_
- #undef _ATOMIC_STORE_
- #undef _ATOMIC_MODIFY_
- #undef _ATOMIC_CMPEXCHNG_
- } // namespace __atomic0
- 
- _GLIBCXX_END_NAMESPACE_VERSION
- } // namespace std
- 
- #endif
--- 0 ----
Index: include/bits/atomic_2.h
===================================================================
*** include/bits/atomic_2.h	(.../trunk/libstdc++-v3)	(revision 180780)
--- include/bits/atomic_2.h	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
***************
*** 1,685 ****
- // -*- C++ -*- header.
- 
- // Copyright (C) 2008, 2009, 2010, 2011
- // Free Software Foundation, Inc.
- //
- // This file is part of the GNU ISO C++ Library.  This library is free
- // software; you can redistribute it and/or modify it under the
- // terms of the GNU General Public License as published by the
- // Free Software Foundation; either version 3, or (at your option)
- // any later version.
- 
- // This library is distributed in the hope that it will be useful,
- // but WITHOUT ANY WARRANTY; without even the implied warranty of
- // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- // GNU General Public License for more details.
- 
- // Under Section 7 of GPL version 3, you are granted additional
- // permissions described in the GCC Runtime Library Exception, version
- // 3.1, as published by the Free Software Foundation.
- 
- // You should have received a copy of the GNU General Public License and
- // a copy of the GCC Runtime Library Exception along with this program;
- // see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
- // <http://www.gnu.org/licenses/>.
- 
- /** @file bits/atomic_2.h
-  *  This is an internal header file, included by other library headers.
-  *  Do not attempt to use it directly. @headername{atomic}
-  */
- 
- #ifndef _GLIBCXX_ATOMIC_2_H
- #define _GLIBCXX_ATOMIC_2_H 1
- 
- #pragma GCC system_header
- 
- namespace std _GLIBCXX_VISIBILITY(default)
- {
- _GLIBCXX_BEGIN_NAMESPACE_VERSION
- 
- // 2 == __atomic2 == Always lock-free
- // Assumed:
- // _GLIBCXX_ATOMIC_BUILTINS_1
- // _GLIBCXX_ATOMIC_BUILTINS_2
- // _GLIBCXX_ATOMIC_BUILTINS_4
- // _GLIBCXX_ATOMIC_BUILTINS_8
- namespace __atomic2
- {
-   /// atomic_flag
-   struct atomic_flag : public __atomic_flag_base
-   {
-     atomic_flag() noexcept = default;
-     ~atomic_flag() noexcept = default;
-     atomic_flag(const atomic_flag&) = delete;
-     atomic_flag& operator=(const atomic_flag&) = delete;
-     atomic_flag& operator=(const atomic_flag&) volatile = delete;
- 
-     // Conversion to ATOMIC_FLAG_INIT.
-     atomic_flag(bool __i) noexcept : __atomic_flag_base({ __i }) { }
- 
-     bool
-     test_and_set(memory_order __m = memory_order_seq_cst) noexcept
-     {
-       // Redundant synchronize if built-in for lock is a full barrier.
-       if (__m != memory_order_acquire && __m != memory_order_acq_rel)
- 	__sync_synchronize();
-       return __sync_lock_test_and_set(&_M_i, 1);
-     }
- 
-     bool
-     test_and_set(memory_order __m = memory_order_seq_cst) volatile noexcept
-     {
-       // Redundant synchronize if built-in for lock is a full barrier.
-       if (__m != memory_order_acquire && __m != memory_order_acq_rel)
- 	__sync_synchronize();
-       return __sync_lock_test_and_set(&_M_i, 1);
-     }
- 
-     void
-     clear(memory_order __m = memory_order_seq_cst) noexcept
-     {
-       __glibcxx_assert(__m != memory_order_consume);
-       __glibcxx_assert(__m != memory_order_acquire);
-       __glibcxx_assert(__m != memory_order_acq_rel);
- 
-       __sync_lock_release(&_M_i);
-       if (__m != memory_order_acquire && __m != memory_order_acq_rel)
- 	__sync_synchronize();
-     }
- 
-     void
-     clear(memory_order __m = memory_order_seq_cst) volatile noexcept
-     {
-       __glibcxx_assert(__m != memory_order_consume);
-       __glibcxx_assert(__m != memory_order_acquire);
-       __glibcxx_assert(__m != memory_order_acq_rel);
- 
-       __sync_lock_release(&_M_i);
-       if (__m != memory_order_acquire && __m != memory_order_acq_rel)
- 	__sync_synchronize();
-     }
-   };
- 
- 
-   /// Base class for atomic integrals.
-   //
-   // For each of the integral types, define atomic_[integral type] struct
-   //
-   // atomic_bool     bool
-   // atomic_char     char
-   // atomic_schar    signed char
-   // atomic_uchar    unsigned char
-   // atomic_short    short
-   // atomic_ushort   unsigned short
-   // atomic_int      int
-   // atomic_uint     unsigned int
-   // atomic_long     long
-   // atomic_ulong    unsigned long
-   // atomic_llong    long long
-   // atomic_ullong   unsigned long long
-   // atomic_char16_t char16_t
-   // atomic_char32_t char32_t
-   // atomic_wchar_t  wchar_t
-   //
-   // NB: Assuming _ITp is an integral scalar type that is 1, 2, 4, or
-   // 8 bytes, since that is what GCC built-in functions for atomic
-   // memory access expect.
-   template<typename _ITp>
-     struct __atomic_base
-     {
-     private:
-       typedef _ITp 	__int_type;
- 
-       __int_type 	_M_i;
- 
-     public:
-       __atomic_base() noexcept = default;
-       ~__atomic_base() noexcept = default;
-       __atomic_base(const __atomic_base&) = delete;
-       __atomic_base& operator=(const __atomic_base&) = delete;
-       __atomic_base& operator=(const __atomic_base&) volatile = delete;
- 
-       // Requires __int_type convertible to _M_i.
-       constexpr __atomic_base(__int_type __i) noexcept : _M_i (__i) { }
- 
-       operator __int_type() const noexcept
-       { return load(); }
- 
-       operator __int_type() const volatile noexcept
-       { return load(); }
- 
-       __int_type
-       operator=(__int_type __i) noexcept
-       {
- 	store(__i);
- 	return __i;
-       }
- 
-       __int_type
-       operator=(__int_type __i) volatile noexcept
-       {
- 	store(__i);
- 	return __i;
-       }
- 
-       __int_type
-       operator++(int) noexcept
-       { return fetch_add(1); }
- 
-       __int_type
-       operator++(int) volatile noexcept
-       { return fetch_add(1); }
- 
-       __int_type
-       operator--(int) noexcept
-       { return fetch_sub(1); }
- 
-       __int_type
-       operator--(int) volatile noexcept
-       { return fetch_sub(1); }
- 
-       __int_type
-       operator++() noexcept
-       { return __sync_add_and_fetch(&_M_i, 1); }
- 
-       __int_type
-       operator++() volatile noexcept
-       { return __sync_add_and_fetch(&_M_i, 1); }
- 
-       __int_type
-       operator--() noexcept
-       { return __sync_sub_and_fetch(&_M_i, 1); }
- 
-       __int_type
-       operator--() volatile noexcept
-       { return __sync_sub_and_fetch(&_M_i, 1); }
- 
-       __int_type
-       operator+=(__int_type __i) noexcept
-       { return __sync_add_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator+=(__int_type __i) volatile noexcept
-       { return __sync_add_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator-=(__int_type __i) noexcept
-       { return __sync_sub_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator-=(__int_type __i) volatile noexcept
-       { return __sync_sub_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator&=(__int_type __i) noexcept
-       { return __sync_and_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator&=(__int_type __i) volatile noexcept
-       { return __sync_and_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator|=(__int_type __i) noexcept
-       { return __sync_or_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator|=(__int_type __i) volatile noexcept
-       { return __sync_or_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator^=(__int_type __i) noexcept
-       { return __sync_xor_and_fetch(&_M_i, __i); }
- 
-       __int_type
-       operator^=(__int_type __i) volatile noexcept
-       { return __sync_xor_and_fetch(&_M_i, __i); }
- 
-       bool
-       is_lock_free() const noexcept
-       { return true; }
- 
-       bool
-       is_lock_free() const volatile noexcept
-       { return true; }
- 
-       void
-       store(__int_type __i, memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_acquire);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	__glibcxx_assert(__m != memory_order_consume);
- 
- 	if (__m == memory_order_relaxed)
- 	  _M_i = __i;
- 	else
- 	  {
- 	    // write_mem_barrier();
- 	    _M_i = __i;
- 	    if (__m == memory_order_seq_cst)
- 	      __sync_synchronize();
- 	  }
-       }
- 
-       void
-       store(__int_type __i,
- 	    memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_acquire);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	__glibcxx_assert(__m != memory_order_consume);
- 
- 	if (__m == memory_order_relaxed)
- 	  _M_i = __i;
- 	else
- 	  {
- 	    // write_mem_barrier();
- 	    _M_i = __i;
- 	    if (__m == memory_order_seq_cst)
- 	      __sync_synchronize();
- 	  }
-       }
- 
-       __int_type
-       load(memory_order __m = memory_order_seq_cst) const noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_release);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 
- 	__sync_synchronize();
- 	__int_type __ret = _M_i;
- 	__sync_synchronize();
- 	return __ret;
-       }
- 
-       __int_type
-       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_release);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 
- 	__sync_synchronize();
- 	__int_type __ret = _M_i;
- 	__sync_synchronize();
- 	return __ret;
-       }
- 
-       __int_type
-       exchange(__int_type __i,
- 	       memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	// XXX built-in assumes memory_order_acquire.
- 	return __sync_lock_test_and_set(&_M_i, __i);
-       }
- 
- 
-       __int_type
-       exchange(__int_type __i,
- 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	// XXX built-in assumes memory_order_acquire.
- 	return __sync_lock_test_and_set(&_M_i, __i);
-       }
- 
-       bool
-       compare_exchange_weak(__int_type& __i1, __int_type __i2,
- 			    memory_order __m1, memory_order __m2) noexcept
-       { return compare_exchange_strong(__i1, __i2, __m1, __m2); }
- 
-       bool
-       compare_exchange_weak(__int_type& __i1, __int_type __i2,
- 			    memory_order __m1,
- 			    memory_order __m2) volatile noexcept
-       { return compare_exchange_strong(__i1, __i2, __m1, __m2); }
- 
-       bool
-       compare_exchange_weak(__int_type& __i1, __int_type __i2,
- 			    memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	return compare_exchange_weak(__i1, __i2, __m,
- 				     __calculate_memory_order(__m));
-       }
- 
-       bool
-       compare_exchange_weak(__int_type& __i1, __int_type __i2,
- 		   memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	return compare_exchange_weak(__i1, __i2, __m,
- 				     __calculate_memory_order(__m));
-       }
- 
-       bool
-       compare_exchange_strong(__int_type& __i1, __int_type __i2,
- 			      memory_order __m1, memory_order __m2) noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 
- 	__int_type __i1o = __i1;
- 	__int_type __i1n = __sync_val_compare_and_swap(&_M_i, __i1o, __i2);
- 
- 	// Assume extra stores (of same value) allowed in true case.
- 	__i1 = __i1n;
- 	return __i1o == __i1n;
-       }
- 
-       bool
-       compare_exchange_strong(__int_type& __i1, __int_type __i2,
- 			      memory_order __m1,
- 			      memory_order __m2) volatile noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 
- 	__int_type __i1o = __i1;
- 	__int_type __i1n = __sync_val_compare_and_swap(&_M_i, __i1o, __i2);
- 
- 	// Assume extra stores (of same value) allowed in true case.
- 	__i1 = __i1n;
- 	return __i1o == __i1n;
-       }
- 
-       bool
-       compare_exchange_strong(__int_type& __i1, __int_type __i2,
- 			      memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	return compare_exchange_strong(__i1, __i2, __m,
- 				       __calculate_memory_order(__m));
-       }
- 
-       bool
-       compare_exchange_strong(__int_type& __i1, __int_type __i2,
- 		 memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	return compare_exchange_strong(__i1, __i2, __m,
- 				       __calculate_memory_order(__m));
-       }
- 
-       __int_type
-       fetch_add(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return __sync_fetch_and_add(&_M_i, __i); }
- 
-       __int_type
-       fetch_add(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return __sync_fetch_and_add(&_M_i, __i); }
- 
-       __int_type
-       fetch_sub(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return __sync_fetch_and_sub(&_M_i, __i); }
- 
-       __int_type
-       fetch_sub(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return __sync_fetch_and_sub(&_M_i, __i); }
- 
-       __int_type
-       fetch_and(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return __sync_fetch_and_and(&_M_i, __i); }
- 
-       __int_type
-       fetch_and(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return __sync_fetch_and_and(&_M_i, __i); }
- 
-       __int_type
-       fetch_or(__int_type __i,
- 	       memory_order __m = memory_order_seq_cst) noexcept
-       { return __sync_fetch_and_or(&_M_i, __i); }
- 
-       __int_type
-       fetch_or(__int_type __i,
- 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return __sync_fetch_and_or(&_M_i, __i); }
- 
-       __int_type
-       fetch_xor(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return __sync_fetch_and_xor(&_M_i, __i); }
- 
-       __int_type
-       fetch_xor(__int_type __i,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return __sync_fetch_and_xor(&_M_i, __i); }
-     };
- 
- 
-   /// Partial specialization for pointer types.
-   template<typename _PTp>
-     struct __atomic_base<_PTp*>
-     {
-     private:
-       typedef _PTp* 	__pointer_type;
- 
-       __pointer_type 	_M_p;
- 
-     public:
-       __atomic_base() noexcept = default;
-       ~__atomic_base() noexcept = default;
-       __atomic_base(const __atomic_base&) = delete;
-       __atomic_base& operator=(const __atomic_base&) = delete;
-       __atomic_base& operator=(const __atomic_base&) volatile = delete;
- 
-       // Requires __pointer_type convertible to _M_p.
-       constexpr __atomic_base(__pointer_type __p) noexcept : _M_p (__p) { }
- 
-       operator __pointer_type() const noexcept
-       { return load(); }
- 
-       operator __pointer_type() const volatile noexcept
-       { return load(); }
- 
-       __pointer_type
-       operator=(__pointer_type __p) noexcept
-       {
- 	store(__p);
- 	return __p;
-       }
- 
-       __pointer_type
-       operator=(__pointer_type __p) volatile noexcept
-       {
- 	store(__p);
- 	return __p;
-       }
- 
-       __pointer_type
-       operator++(int) noexcept
-       { return fetch_add(1); }
- 
-       __pointer_type
-       operator++(int) volatile noexcept
-       { return fetch_add(1); }
- 
-       __pointer_type
-       operator--(int) noexcept
-       { return fetch_sub(1); }
- 
-       __pointer_type
-       operator--(int) volatile noexcept
-       { return fetch_sub(1); }
- 
-       __pointer_type
-       operator++() noexcept
-       { return fetch_add(1) + 1; }
- 
-       __pointer_type
-       operator++() volatile noexcept
-       { return fetch_add(1) + 1; }
- 
-       __pointer_type
-       operator--() noexcept
-       { return fetch_sub(1) -1; }
- 
-       __pointer_type
-       operator--() volatile noexcept
-       { return fetch_sub(1) -1; }
- 
-       __pointer_type
-       operator+=(ptrdiff_t __d) noexcept
-       { return fetch_add(__d) + __d; }
- 
-       __pointer_type
-       operator+=(ptrdiff_t __d) volatile noexcept
-       { return fetch_add(__d) + __d; }
- 
-       __pointer_type
-       operator-=(ptrdiff_t __d) noexcept
-       { return fetch_sub(__d) - __d; }
- 
-       __pointer_type
-       operator-=(ptrdiff_t __d) volatile noexcept
-       { return fetch_sub(__d) - __d; }
- 
-       bool
-       is_lock_free() const noexcept
-       { return true; }
- 
-       bool
-       is_lock_free() const volatile noexcept
-       { return true; }
- 
-       void
-       store(__pointer_type __p,
- 	    memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_acquire);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	__glibcxx_assert(__m != memory_order_consume);
- 
- 	if (__m == memory_order_relaxed)
- 	  _M_p = __p;
- 	else
- 	  {
- 	    // write_mem_barrier();
- 	    _M_p = __p;
- 	    if (__m == memory_order_seq_cst)
- 	      __sync_synchronize();
- 	  }
-       }
- 
-       void
-       store(__pointer_type __p,
- 	    memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_acquire);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 	__glibcxx_assert(__m != memory_order_consume);
- 
- 	if (__m == memory_order_relaxed)
- 	  _M_p = __p;
- 	else
- 	  {
- 	    // write_mem_barrier();
- 	    _M_p = __p;
- 	    if (__m == memory_order_seq_cst)
- 	      __sync_synchronize();
- 	  }
-       }
- 
-       __pointer_type
-       load(memory_order __m = memory_order_seq_cst) const noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_release);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 
- 	__sync_synchronize();
- 	__pointer_type __ret = _M_p;
- 	__sync_synchronize();
- 	return __ret;
-       }
- 
-       __pointer_type
-       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
-       {
- 	__glibcxx_assert(__m != memory_order_release);
- 	__glibcxx_assert(__m != memory_order_acq_rel);
- 
- 	__sync_synchronize();
- 	__pointer_type __ret = _M_p;
- 	__sync_synchronize();
- 	return __ret;
-       }
- 
-       __pointer_type
-       exchange(__pointer_type __p,
- 	       memory_order __m = memory_order_seq_cst) noexcept
-       {
- 	// XXX built-in assumes memory_order_acquire.
- 	return __sync_lock_test_and_set(&_M_p, __p);
-       }
- 
- 
-       __pointer_type
-       exchange(__pointer_type __p,
- 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-       {
- 	// XXX built-in assumes memory_order_acquire.
- 	return __sync_lock_test_and_set(&_M_p, __p);
-       }
- 
-       bool
-       compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
- 			      memory_order __m1,
- 			      memory_order __m2) noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 
- 	__pointer_type __p1o = __p1;
- 	__pointer_type __p1n = __sync_val_compare_and_swap(&_M_p, __p1o, __p2);
- 
- 	// Assume extra stores (of same value) allowed in true case.
- 	__p1 = __p1n;
- 	return __p1o == __p1n;
-       }
- 
-       bool
-       compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
- 			      memory_order __m1,
- 			      memory_order __m2) volatile noexcept
-       {
- 	__glibcxx_assert(__m2 != memory_order_release);
- 	__glibcxx_assert(__m2 != memory_order_acq_rel);
- 	__glibcxx_assert(__m2 <= __m1);
- 
- 	__pointer_type __p1o = __p1;
- 	__pointer_type __p1n = __sync_val_compare_and_swap(&_M_p, __p1o, __p2);
- 
- 	// Assume extra stores (of same value) allowed in true case.
- 	__p1 = __p1n;
- 	return __p1o == __p1n;
-       }
- 
-       __pointer_type
-       fetch_add(ptrdiff_t __d,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return __sync_fetch_and_add(&_M_p, __d); }
- 
-       __pointer_type
-       fetch_add(ptrdiff_t __d,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return __sync_fetch_and_add(&_M_p, __d); }
- 
-       __pointer_type
-       fetch_sub(ptrdiff_t __d,
- 		memory_order __m = memory_order_seq_cst) noexcept
-       { return __sync_fetch_and_sub(&_M_p, __d); }
- 
-       __pointer_type
-       fetch_sub(ptrdiff_t __d,
- 		memory_order __m = memory_order_seq_cst) volatile noexcept
-       { return __sync_fetch_and_sub(&_M_p, __d); }
-     };
- 
- } // namespace __atomic2
- 
- _GLIBCXX_END_NAMESPACE_VERSION
- } // namespace std
- 
- #endif
--- 0 ----
Index: include/bits/atomic_base.h
===================================================================
*** include/bits/atomic_base.h	(.../trunk/libstdc++-v3)	(revision 180780)
--- include/bits/atomic_base.h	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
*************** _GLIBCXX_BEGIN_NAMESPACE_VERSION
*** 83,168 ****
        return __ret;
      }
  
!   /**
!    *  @brief Base type for atomic_flag.
!    *
!    *  Base type is POD with data, allowing atomic_flag to derive from
!    *  it and meet the standard layout type requirement. In addition to
!    *  compatibilty with a C interface, this allows different
!    *  implementations of atomic_flag to use the same atomic operation
!    *  functions, via a standard conversion to the __atomic_flag_base
!    *  argument.
!   */
!   _GLIBCXX_BEGIN_EXTERN_C
! 
!   struct __atomic_flag_base
!   {
!     bool _M_i;
!   };
  
!   _GLIBCXX_END_EXTERN_C
  
! #define ATOMIC_FLAG_INIT { false }
  
  
    // Base types for atomics.
!   //
!   // Three nested namespaces for atomic implementation details.
!   //
!   // The nested namespace inlined into std:: is determined by the value
!   // of the _GLIBCXX_ATOMIC_PROPERTY macro and the resulting
!   // ATOMIC_*_LOCK_FREE macros.
!   //
!   // 0 == __atomic0 == Never lock-free
!   // 1 == __atomic1 == Best available, sometimes lock-free
!   // 2 == __atomic2 == Always lock-free
! 
!   namespace __atomic0
!   {
!     struct atomic_flag;
! 
!     template<typename _IntTp>
!       struct __atomic_base;
!   }
! 
!   namespace __atomic2
!   {
!     struct atomic_flag;
! 
!     template<typename _IntTp>
!       struct __atomic_base;
!   }
! 
!   namespace __atomic1
!   {
!     using __atomic2::atomic_flag;
!     using __atomic0::__atomic_base;
!   }
! 
!   /// Lock-free Property
! #if defined(_GLIBCXX_ATOMIC_BUILTINS_1) && defined(_GLIBCXX_ATOMIC_BUILTINS_2) \
!   && defined(_GLIBCXX_ATOMIC_BUILTINS_4) && defined(_GLIBCXX_ATOMIC_BUILTINS_8)
! # define _GLIBCXX_ATOMIC_PROPERTY 2
! # define _GLIBCXX_ATOMIC_NAMESPACE __atomic2
! #elif defined(_GLIBCXX_ATOMIC_BUILTINS_1)
! # define _GLIBCXX_ATOMIC_PROPERTY 1
! # define _GLIBCXX_ATOMIC_NAMESPACE __atomic1
! #else
! # define _GLIBCXX_ATOMIC_PROPERTY 0
! # define _GLIBCXX_ATOMIC_NAMESPACE __atomic0
! #endif
! 
! #define ATOMIC_CHAR_LOCK_FREE _GLIBCXX_ATOMIC_PROPERTY
! #define ATOMIC_CHAR16_T_LOCK_FREE _GLIBCXX_ATOMIC_PROPERTY
! #define ATOMIC_CHAR32_T_LOCK_FREE _GLIBCXX_ATOMIC_PROPERTY
! #define ATOMIC_WCHAR_T_LOCK_FREE _GLIBCXX_ATOMIC_PROPERTY
! #define ATOMIC_SHORT_LOCK_FREE _GLIBCXX_ATOMIC_PROPERTY
! #define ATOMIC_INT_LOCK_FREE _GLIBCXX_ATOMIC_PROPERTY
! #define ATOMIC_LONG_LOCK_FREE _GLIBCXX_ATOMIC_PROPERTY
! #define ATOMIC_LLONG_LOCK_FREE _GLIBCXX_ATOMIC_PROPERTY
! 
!   inline namespace _GLIBCXX_ATOMIC_NAMESPACE { }
! 
  
    /// atomic_char
    typedef __atomic_base<char>  	       		atomic_char;
--- 83,105 ----
        return __ret;
      }
  
!   /// Lock-free Property
  
! #define LOCKFREE_PROP(T) (__atomic_always_lock_free (sizeof (T), 0) ? 2 : 1)
  
! #define ATOMIC_CHAR_LOCK_FREE 		LOCKFREE_PROP (char)
! #define ATOMIC_CHAR16_T_LOCK_FREE	LOCKFREE_PROP (char16_t)
! #define ATOMIC_CHAR32_T_LOCK_FREE	LOCKFREE_PROP (char32_t)
! #define ATOMIC_WCHAR_T_LOCK_FREE	LOCKFREE_PROP (wchar_t)
! #define ATOMIC_SHORT_LOCK_FREE		LOCKFREE_PROP (short)
! #define ATOMIC_INT_LOCK_FREE		LOCKFREE_PROP (int)
! #define ATOMIC_LONG_LOCK_FREE		LOCKFREE_PROP (long)
! #define ATOMIC_LLONG_LOCK_FREE		LOCKFREE_PROP (long long)
  
  
    // Base types for atomics.
!   template<typename _IntTp>
!     struct __atomic_base;
  
    /// atomic_char
    typedef __atomic_base<char>  	       		atomic_char;
*************** _GLIBCXX_BEGIN_NAMESPACE_VERSION
*** 287,292 ****
--- 224,817 ----
    template<typename _Tp>
      struct atomic<_Tp*>;
  
+ 
+   /**
+    *  @brief Base type for atomic_flag.
+    *
+    *  Base type is POD with data, allowing atomic_flag to derive from
+    *  it and meet the standard layout type requirement. In addition to
+    *  compatibilty with a C interface, this allows different
+    *  implementations of atomic_flag to use the same atomic operation
+    *  functions, via a standard conversion to the __atomic_flag_base
+    *  argument.
+   */
+   _GLIBCXX_BEGIN_EXTERN_C
+ 
+   struct __atomic_flag_base
+   {
+     bool _M_i;
+   };
+ 
+   _GLIBCXX_END_EXTERN_C
+ 
+ #define ATOMIC_FLAG_INIT { false }
+ 
+   /// atomic_flag
+   struct atomic_flag : public __atomic_flag_base
+   {
+     atomic_flag() noexcept = default;
+     ~atomic_flag() noexcept = default;
+     atomic_flag(const atomic_flag&) = delete;
+     atomic_flag& operator=(const atomic_flag&) = delete;
+     atomic_flag& operator=(const atomic_flag&) volatile = delete;
+ 
+     // Conversion to ATOMIC_FLAG_INIT.
+     atomic_flag(bool __i) noexcept : __atomic_flag_base({ __i }) { }
+ 
+     bool
+     test_and_set(memory_order __m = memory_order_seq_cst) noexcept
+     {
+       return __atomic_exchange_n(&_M_i, 1, __m);
+     }
+ 
+     bool
+     test_and_set(memory_order __m = memory_order_seq_cst) volatile noexcept
+     {
+       return __atomic_exchange_n(&_M_i, 1, __m);
+     }
+ 
+     void
+     clear(memory_order __m = memory_order_seq_cst) noexcept
+     {
+       __glibcxx_assert(__m != memory_order_consume);
+       __glibcxx_assert(__m != memory_order_acquire);
+       __glibcxx_assert(__m != memory_order_acq_rel);
+ 
+       __atomic_store_n(&_M_i, 0, __m);
+     }
+ 
+     void
+     clear(memory_order __m = memory_order_seq_cst) volatile noexcept
+     {
+       __glibcxx_assert(__m != memory_order_consume);
+       __glibcxx_assert(__m != memory_order_acquire);
+       __glibcxx_assert(__m != memory_order_acq_rel);
+ 
+       __atomic_store_n(&_M_i, 0, __m);
+     }
+   };
+ 
+ 
+   /// Base class for atomic integrals.
+   //
+   // For each of the integral types, define atomic_[integral type] struct
+   //
+   // atomic_bool     bool
+   // atomic_char     char
+   // atomic_schar    signed char
+   // atomic_uchar    unsigned char
+   // atomic_short    short
+   // atomic_ushort   unsigned short
+   // atomic_int      int
+   // atomic_uint     unsigned int
+   // atomic_long     long
+   // atomic_ulong    unsigned long
+   // atomic_llong    long long
+   // atomic_ullong   unsigned long long
+   // atomic_char16_t char16_t
+   // atomic_char32_t char32_t
+   // atomic_wchar_t  wchar_t
+   //
+   // NB: Assuming _ITp is an integral scalar type that is 1, 2, 4, or
+   // 8 bytes, since that is what GCC built-in functions for atomic
+   // memory access expect.
+   template<typename _ITp>
+     struct __atomic_base
+     {
+     private:
+       typedef _ITp 	__int_type;
+ 
+       __int_type 	_M_i;
+ 
+     public:
+       __atomic_base() noexcept = default;
+       ~__atomic_base() noexcept = default;
+       __atomic_base(const __atomic_base&) = delete;
+       __atomic_base& operator=(const __atomic_base&) = delete;
+       __atomic_base& operator=(const __atomic_base&) volatile = delete;
+ 
+       // Requires __int_type convertible to _M_i.
+       constexpr __atomic_base(__int_type __i) noexcept : _M_i (__i) { }
+ 
+       operator __int_type() const noexcept
+       { return load(); }
+ 
+       operator __int_type() const volatile noexcept
+       { return load(); }
+ 
+       __int_type
+       operator=(__int_type __i) noexcept
+       {
+ 	store(__i);
+ 	return __i;
+       }
+ 
+       __int_type
+       operator=(__int_type __i) volatile noexcept
+       {
+ 	store(__i);
+ 	return __i;
+       }
+ 
+       __int_type
+       operator++(int) noexcept
+       { return fetch_add(1); }
+ 
+       __int_type
+       operator++(int) volatile noexcept
+       { return fetch_add(1); }
+ 
+       __int_type
+       operator--(int) noexcept
+       { return fetch_sub(1); }
+ 
+       __int_type
+       operator--(int) volatile noexcept
+       { return fetch_sub(1); }
+ 
+       __int_type
+       operator++() noexcept
+       { return __atomic_add_fetch(&_M_i, 1, memory_order_seq_cst); }
+ 
+       __int_type
+       operator++() volatile noexcept
+       { return __atomic_add_fetch(&_M_i, 1, memory_order_seq_cst); }
+ 
+       __int_type
+       operator--() noexcept
+       { return __atomic_sub_fetch(&_M_i, 1, memory_order_seq_cst); }
+ 
+       __int_type
+       operator--() volatile noexcept
+       { return __atomic_sub_fetch(&_M_i, 1, memory_order_seq_cst); }
+ 
+       __int_type
+       operator+=(__int_type __i) noexcept
+       { return __atomic_add_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator+=(__int_type __i) volatile noexcept
+       { return __atomic_add_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator-=(__int_type __i) noexcept
+       { return __atomic_sub_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator-=(__int_type __i) volatile noexcept
+       { return __atomic_sub_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator&=(__int_type __i) noexcept
+       { return __atomic_and_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator&=(__int_type __i) volatile noexcept
+       { return __atomic_and_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator|=(__int_type __i) noexcept
+       { return __atomic_or_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator|=(__int_type __i) volatile noexcept
+       { return __atomic_or_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator^=(__int_type __i) noexcept
+       { return __atomic_xor_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       __int_type
+       operator^=(__int_type __i) volatile noexcept
+       { return __atomic_xor_fetch(&_M_i, __i, memory_order_seq_cst); }
+ 
+       bool
+       is_lock_free() const noexcept
+       { return __atomic_is_lock_free (sizeof (_M_i), &_M_i); }
+ 
+       bool
+       is_lock_free() const volatile noexcept
+       { return __atomic_is_lock_free (sizeof (_M_i), &_M_i); }
+ 
+       void
+       store(__int_type __i, memory_order __m = memory_order_seq_cst) noexcept
+       {
+ 	__glibcxx_assert(__m != memory_order_acquire);
+ 	__glibcxx_assert(__m != memory_order_acq_rel);
+ 	__glibcxx_assert(__m != memory_order_consume);
+ 
+ 	__atomic_store_n(&_M_i, __i, __m);
+       }
+ 
+       void
+       store(__int_type __i,
+ 	    memory_order __m = memory_order_seq_cst) volatile noexcept
+       {
+ 	__glibcxx_assert(__m != memory_order_acquire);
+ 	__glibcxx_assert(__m != memory_order_acq_rel);
+ 	__glibcxx_assert(__m != memory_order_consume);
+ 
+ 	__atomic_store_n(&_M_i, __i, __m);
+       }
+ 
+       __int_type
+       load(memory_order __m = memory_order_seq_cst) const noexcept
+       {
+ 	__glibcxx_assert(__m != memory_order_release);
+ 	__glibcxx_assert(__m != memory_order_acq_rel);
+ 
+ 	return __atomic_load_n(&_M_i, __m);
+       }
+ 
+       __int_type
+       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
+       {
+ 	__glibcxx_assert(__m != memory_order_release);
+ 	__glibcxx_assert(__m != memory_order_acq_rel);
+ 
+ 	return __atomic_load_n(&_M_i, __m);
+       }
+ 
+       __int_type
+       exchange(__int_type __i,
+ 	       memory_order __m = memory_order_seq_cst) noexcept
+       {
+ 	return __atomic_exchange_n(&_M_i, __i, __m);
+       }
+ 
+ 
+       __int_type
+       exchange(__int_type __i,
+ 	       memory_order __m = memory_order_seq_cst) volatile noexcept
+       {
+ 	return __atomic_exchange_n(&_M_i, __i, __m);
+       }
+ 
+       bool
+       compare_exchange_weak(__int_type& __i1, __int_type __i2,
+ 			    memory_order __m1, memory_order __m2) noexcept
+       {
+ 	__glibcxx_assert(__m2 != memory_order_release);
+ 	__glibcxx_assert(__m2 != memory_order_acq_rel);
+ 	__glibcxx_assert(__m2 <= __m1);
+ 
+ 	return __atomic_compare_exchange_n(&_M_i, &__i1, __i2, 1, __m1, __m2);
+       }
+ 
+       bool
+       compare_exchange_weak(__int_type& __i1, __int_type __i2,
+ 			    memory_order __m1,
+ 			    memory_order __m2) volatile noexcept
+       {
+ 	__glibcxx_assert(__m2 != memory_order_release);
+ 	__glibcxx_assert(__m2 != memory_order_acq_rel);
+ 	__glibcxx_assert(__m2 <= __m1);
+ 
+ 	return __atomic_compare_exchange_n(&_M_i, &__i1, __i2, 1, __m1, __m2);
+       }
+ 
+       bool
+       compare_exchange_weak(__int_type& __i1, __int_type __i2,
+ 			    memory_order __m = memory_order_seq_cst) noexcept
+       {
+ 	return compare_exchange_weak(__i1, __i2, __m,
+ 				     __calculate_memory_order(__m));
+       }
+ 
+       bool
+       compare_exchange_weak(__int_type& __i1, __int_type __i2,
+ 		   memory_order __m = memory_order_seq_cst) volatile noexcept
+       {
+ 	return compare_exchange_weak(__i1, __i2, __m,
+ 				     __calculate_memory_order(__m));
+       }
+ 
+       bool
+       compare_exchange_strong(__int_type& __i1, __int_type __i2,
+ 			      memory_order __m1, memory_order __m2) noexcept
+       {
+ 	__glibcxx_assert(__m2 != memory_order_release);
+ 	__glibcxx_assert(__m2 != memory_order_acq_rel);
+ 	__glibcxx_assert(__m2 <= __m1);
+ 
+ 	return __atomic_compare_exchange_n(&_M_i, &__i1, __i2, 0, __m1, __m2);
+       }
+ 
+       bool
+       compare_exchange_strong(__int_type& __i1, __int_type __i2,
+ 			      memory_order __m1,
+ 			      memory_order __m2) volatile noexcept
+       {
+ 	__glibcxx_assert(__m2 != memory_order_release);
+ 	__glibcxx_assert(__m2 != memory_order_acq_rel);
+ 	__glibcxx_assert(__m2 <= __m1);
+ 
+ 	return __atomic_compare_exchange_n(&_M_i, &__i1, __i2, 0, __m1, __m2);
+       }
+ 
+       bool
+       compare_exchange_strong(__int_type& __i1, __int_type __i2,
+ 			      memory_order __m = memory_order_seq_cst) noexcept
+       {
+ 	return compare_exchange_strong(__i1, __i2, __m,
+ 				       __calculate_memory_order(__m));
+       }
+ 
+       bool
+       compare_exchange_strong(__int_type& __i1, __int_type __i2,
+ 		 memory_order __m = memory_order_seq_cst) volatile noexcept
+       {
+ 	return compare_exchange_strong(__i1, __i2, __m,
+ 				       __calculate_memory_order(__m));
+       }
+ 
+       __int_type
+       fetch_add(__int_type __i,
+ 		memory_order __m = memory_order_seq_cst) noexcept
+       { return __atomic_fetch_add(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_add(__int_type __i,
+ 		memory_order __m = memory_order_seq_cst) volatile noexcept
+       { return __atomic_fetch_add(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_sub(__int_type __i,
+ 		memory_order __m = memory_order_seq_cst) noexcept
+       { return __atomic_fetch_sub(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_sub(__int_type __i,
+ 		memory_order __m = memory_order_seq_cst) volatile noexcept
+       { return __atomic_fetch_sub(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_and(__int_type __i,
+ 		memory_order __m = memory_order_seq_cst) noexcept
+       { return __atomic_fetch_and(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_and(__int_type __i,
+ 		memory_order __m = memory_order_seq_cst) volatile noexcept
+       { return __atomic_fetch_and(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_or(__int_type __i,
+ 	       memory_order __m = memory_order_seq_cst) noexcept
+       { return __atomic_fetch_or(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_or(__int_type __i,
+ 	       memory_order __m = memory_order_seq_cst) volatile noexcept
+       { return __atomic_fetch_or(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_xor(__int_type __i,
+ 		memory_order __m = memory_order_seq_cst) noexcept
+       { return __atomic_fetch_xor(&_M_i, __i, __m); }
+ 
+       __int_type
+       fetch_xor(__int_type __i,
+ 		memory_order __m = memory_order_seq_cst) volatile noexcept
+       { return __atomic_fetch_xor(&_M_i, __i, __m); }
+     };
+ 
+ 
+   /// Partial specialization for pointer types.
+   template<typename _PTp>
+     struct __atomic_base<_PTp*>
+     {
+     private:
+       typedef _PTp* 	__pointer_type;
+ 
+       __pointer_type 	_M_p;
+ 
+     public:
+       __atomic_base() noexcept = default;
+       ~__atomic_base() noexcept = default;
+       __atomic_base(const __atomic_base&) = delete;
+       __atomic_base& operator=(const __atomic_base&) = delete;
+       __atomic_base& operator=(const __atomic_base&) volatile = delete;
+ 
+       // Requires __pointer_type convertible to _M_p.
+       constexpr __atomic_base(__pointer_type __p) noexcept : _M_p (__p) { }
+ 
+       operator __pointer_type() const noexcept
+       { return load(); }
+ 
+       operator __pointer_type() const volatile noexcept
+       { return load(); }
+ 
+       __pointer_type
+       operator=(__pointer_type __p) noexcept
+       {
+ 	store(__p);
+ 	return __p;
+       }
+ 
+       __pointer_type
+       operator=(__pointer_type __p) volatile noexcept
+       {
+ 	store(__p);
+ 	return __p;
+       }
+ 
+       __pointer_type
+       operator++(int) noexcept
+       { return fetch_add(1); }
+ 
+       __pointer_type
+       operator++(int) volatile noexcept
+       { return fetch_add(1); }
+ 
+       __pointer_type
+       operator--(int) noexcept
+       { return fetch_sub(1); }
+ 
+       __pointer_type
+       operator--(int) volatile noexcept
+       { return fetch_sub(1); }
+ 
+       __pointer_type
+       operator++() noexcept
+       { return __atomic_add_fetch(&_M_p, 1, memory_order_seq_cst); }
+ 
+       __pointer_type
+       operator++() volatile noexcept
+       { return __atomic_add_fetch(&_M_p, 1, memory_order_seq_cst); }
+ 
+       __pointer_type
+       operator--() noexcept
+       { return __atomic_sub_fetch(&_M_p, 1, memory_order_seq_cst); }
+ 
+       __pointer_type
+       operator--() volatile noexcept
+       { return __atomic_sub_fetch(&_M_p, 1, memory_order_seq_cst); }
+ 
+       __pointer_type
+       operator+=(ptrdiff_t __d) noexcept
+       { return __atomic_add_fetch(&_M_p, __d, memory_order_seq_cst); }
+ 
+       __pointer_type
+       operator+=(ptrdiff_t __d) volatile noexcept
+       { return __atomic_add_fetch(&_M_p, __d, memory_order_seq_cst); }
+ 
+       __pointer_type
+       operator-=(ptrdiff_t __d) noexcept
+       { return __atomic_sub_fetch(&_M_p, __d, memory_order_seq_cst); }
+ 
+       __pointer_type
+       operator-=(ptrdiff_t __d) volatile noexcept
+       { return __atomic_sub_fetch(&_M_p, __d, memory_order_seq_cst); }
+ 
+       bool
+       is_lock_free() const noexcept
+       { return __atomic_is_lock_free (sizeof (_M_p), &_M_p); }
+ 
+       bool
+       is_lock_free() const volatile noexcept
+       { return __atomic_is_lock_free (sizeof (_M_p), &_M_p); }
+ 
+       void
+       store(__pointer_type __p,
+ 	    memory_order __m = memory_order_seq_cst) noexcept
+       {
+ 	__glibcxx_assert(__m != memory_order_acquire);
+ 	__glibcxx_assert(__m != memory_order_acq_rel);
+ 	__glibcxx_assert(__m != memory_order_consume);
+ 
+ 	__atomic_store_n(&_M_p, __p, __m);
+       }
+ 
+       void
+       store(__pointer_type __p,
+ 	    memory_order __m = memory_order_seq_cst) volatile noexcept
+       {
+ 	__glibcxx_assert(__m != memory_order_acquire);
+ 	__glibcxx_assert(__m != memory_order_acq_rel);
+ 	__glibcxx_assert(__m != memory_order_consume);
+ 
+ 	__atomic_store_n(&_M_p, __p, __m);
+       }
+ 
+       __pointer_type
+       load(memory_order __m = memory_order_seq_cst) const noexcept
+       {
+ 	__glibcxx_assert(__m != memory_order_release);
+ 	__glibcxx_assert(__m != memory_order_acq_rel);
+ 
+ 	return __atomic_load_n(&_M_p, __m);
+       }
+ 
+       __pointer_type
+       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
+       {
+ 	__glibcxx_assert(__m != memory_order_release);
+ 	__glibcxx_assert(__m != memory_order_acq_rel);
+ 
+ 	return __atomic_load_n(&_M_p, __m);
+       }
+ 
+       __pointer_type
+       exchange(__pointer_type __p,
+ 	       memory_order __m = memory_order_seq_cst) noexcept
+       {
+ 	return __atomic_exchange_n(&_M_p, __p, __m);
+       }
+ 
+ 
+       __pointer_type
+       exchange(__pointer_type __p,
+ 	       memory_order __m = memory_order_seq_cst) volatile noexcept
+       {
+ 	return __atomic_exchange_n(&_M_p, __p, __m);
+       }
+ 
+       bool
+       compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
+ 			      memory_order __m1,
+ 			      memory_order __m2) noexcept
+       {
+ 	__glibcxx_assert(__m2 != memory_order_release);
+ 	__glibcxx_assert(__m2 != memory_order_acq_rel);
+ 	__glibcxx_assert(__m2 <= __m1);
+ 
+ 	return __atomic_compare_exchange_n(&_M_p, &__p1, __p2, 0, __m1, __m2);
+       }
+ 
+       bool
+       compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
+ 			      memory_order __m1,
+ 			      memory_order __m2) volatile noexcept
+       {
+ 	__glibcxx_assert(__m2 != memory_order_release);
+ 	__glibcxx_assert(__m2 != memory_order_acq_rel);
+ 	__glibcxx_assert(__m2 <= __m1);
+ 
+ 	return __atomic_compare_exchange_n(&_M_p, &__p1, __p2, 0, __m1, __m2);
+       }
+ 
+       __pointer_type
+       fetch_add(ptrdiff_t __d,
+ 		memory_order __m = memory_order_seq_cst) noexcept
+       { return __atomic_fetch_add(&_M_p, __d, __m); }
+ 
+       __pointer_type
+       fetch_add(ptrdiff_t __d,
+ 		memory_order __m = memory_order_seq_cst) volatile noexcept
+       { return __atomic_fetch_add(&_M_p, __d, __m); }
+ 
+       __pointer_type
+       fetch_sub(ptrdiff_t __d,
+ 		memory_order __m = memory_order_seq_cst) noexcept
+       { return __atomic_fetch_sub(&_M_p, __d, __m); }
+ 
+       __pointer_type
+       fetch_sub(ptrdiff_t __d,
+ 		memory_order __m = memory_order_seq_cst) volatile noexcept
+       { return __atomic_fetch_sub(&_M_p, __d, __m); }
+     };
+ 
    // @} group atomics
  
  _GLIBCXX_END_NAMESPACE_VERSION
Index: include/Makefile.am
===================================================================
*** include/Makefile.am	(.../trunk/libstdc++-v3)	(revision 180780)
--- include/Makefile.am	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
*************** bits_headers = \
*** 83,90 ****
  	${bits_srcdir}/alloc_traits.h \
  	${bits_srcdir}/allocator.h \
  	${bits_srcdir}/atomic_base.h \
- 	${bits_srcdir}/atomic_0.h \
- 	${bits_srcdir}/atomic_2.h \
  	${bits_srcdir}/basic_ios.h \
  	${bits_srcdir}/basic_ios.tcc \
  	${bits_srcdir}/basic_string.h \
--- 83,88 ----
Index: testsuite/29_atomics/headers/atomic/macros.cc
===================================================================
*** testsuite/29_atomics/headers/atomic/macros.cc	(.../trunk/libstdc++-v3)	(revision 180780)
--- testsuite/29_atomics/headers/atomic/macros.cc	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
***************
*** 20,97 ****
  
  #include <atomic>
  
- namespace gnu
- {
  #ifndef ATOMIC_CHAR_LOCK_FREE 
  # error "ATOMIC_CHAR_LOCK_FREE must be a macro"
- #else
- # if ATOMIC_CHAR_LOCK_FREE != 0 \
-     && ATOMIC_CHAR_LOCK_FREE != 1 && ATOMIC_CHAR_LOCK_FREE != 2
- # error "ATOMIC_CHAR_LOCK_FREE must be 0, 1, or 2"
- # endif
  #endif
  
  #ifndef ATOMIC_CHAR16_T_LOCK_FREE 
  # error "ATOMIC_CHAR16_T_LOCK_FREE must be a macro"
- #else
- # if ATOMIC_CHAR16_T_LOCK_FREE != 0 \
-     && ATOMIC_CHAR16_T_LOCK_FREE != 1 && ATOMIC_CHAR16_T_LOCK_FREE != 2
- # error "ATOMIC_CHAR16_T_LOCK_FREE must be 0, 1, or 2"
- # endif
  #endif
  
  #ifndef ATOMIC_CHAR32_T_LOCK_FREE 
  # error "ATOMIC_CHAR32_T_LOCK_FREE must be a macro"
- #else
- # if ATOMIC_CHAR32_T_LOCK_FREE != 0 \
-     && ATOMIC_CHAR32_T_LOCK_FREE != 1 && ATOMIC_CHAR32_T_LOCK_FREE != 2
- # error "ATOMIC_CHAR32_T_LOCK_FREE must be 0, 1, or 2"
- # endif
  #endif
  
  #ifndef ATOMIC_WCHAR_T_LOCK_FREE 
  # error "ATOMIC_WCHAR_T_LOCK_FREE must be a macro"
- #else
- # if ATOMIC_WCHAR_T_LOCK_FREE != 0 \
-     && ATOMIC_WCHAR_T_LOCK_FREE != 1 && ATOMIC_WCHAR_T_LOCK_FREE != 2
- # error "ATOMIC_WCHAR_T_LOCK_FREE must be 0, 1, or 2"
- # endif
  #endif
  
  #ifndef ATOMIC_SHORT_LOCK_FREE 
  # error "ATOMIC_SHORT_LOCK_FREE must be a macro"
- #else
- # if ATOMIC_SHORT_LOCK_FREE != 0 \
-     && ATOMIC_SHORT_LOCK_FREE != 1 && ATOMIC_SHORT_LOCK_FREE != 2
- # error "ATOMIC_SHORT_LOCK_FREE must be 0, 1, or 2"
- # endif
  #endif
  
  #ifndef ATOMIC_INT_LOCK_FREE 
  # error "ATOMIC_INT_LOCK_FREE must be a macro"
- #else
- # if ATOMIC_INT_LOCK_FREE != 0 \
-     && ATOMIC_INT_LOCK_FREE != 1 && ATOMIC_INT_LOCK_FREE != 2
- # error "ATOMIC_INT_LOCK_FREE must be 0, 1, or 2"
- # endif
  #endif
  
  #ifndef ATOMIC_LONG_LOCK_FREE 
  # error "ATOMIC_LONG_LOCK_FREE must be a macro"
- #else
- # if ATOMIC_LONG_LOCK_FREE != 0 \
-     && ATOMIC_LONG_LOCK_FREE != 1 && ATOMIC_LONG_LOCK_FREE != 2
- # error "ATOMIC_LONG_LOCK_FREE must be 0, 1, or 2"
- # endif
  #endif
  
  #ifndef ATOMIC_LLONG_LOCK_FREE 
  # error "ATOMIC_LLONG_LOCK_FREE must be a macro"
- #else
- # if ATOMIC_LLONG_LOCK_FREE != 0 \
-     && ATOMIC_LLONG_LOCK_FREE != 1 && ATOMIC_LLONG_LOCK_FREE != 2
- # error "ATOMIC_LLONG_LOCK_FREE must be 0, 1, or 2"
- # endif
  #endif
  
  #ifndef ATOMIC_FLAG_INIT
--- 20,55 ----
*************** namespace gnu
*** 101,104 ****
--- 59,99 ----
  #ifndef ATOMIC_VAR_INIT
      #error "ATOMIC_VAR_INIT_must_be_a_macro"
  #endif
+ 
+ 
+ extern void abort(void);
+ 
+ main ()
+ {
+  if (ATOMIC_CHAR_LOCK_FREE != 0 && ATOMIC_CHAR_LOCK_FREE != 1
+      && ATOMIC_CHAR_LOCK_FREE != 2)
+    abort ();
+ 
+  if (ATOMIC_CHAR16_T_LOCK_FREE != 0 && ATOMIC_CHAR16_T_LOCK_FREE != 1
+      && ATOMIC_CHAR16_T_LOCK_FREE != 2)
+    abort ();
+ 
+  if (ATOMIC_CHAR32_T_LOCK_FREE != 0 && ATOMIC_CHAR32_T_LOCK_FREE != 1
+      && ATOMIC_CHAR32_T_LOCK_FREE != 2)
+    abort ();
+ 
+  if (ATOMIC_WCHAR_T_LOCK_FREE != 0 && ATOMIC_WCHAR_T_LOCK_FREE != 1
+      && ATOMIC_WCHAR_T_LOCK_FREE != 2)
+    abort ();
+ 
+  if (ATOMIC_SHORT_LOCK_FREE != 0 && ATOMIC_SHORT_LOCK_FREE != 1
+      && ATOMIC_SHORT_LOCK_FREE != 2)
+    abort ();
+ 
+  if (ATOMIC_INT_LOCK_FREE != 0 && ATOMIC_INT_LOCK_FREE != 1
+      && ATOMIC_INT_LOCK_FREE != 2)
+    abort ();
+ 
+  if (ATOMIC_LONG_LOCK_FREE != 0 && ATOMIC_LONG_LOCK_FREE != 1
+      && ATOMIC_LONG_LOCK_FREE != 2)
+    abort ();
+ 
+  if (ATOMIC_LLONG_LOCK_FREE != 0 && ATOMIC_LLONG_LOCK_FREE != 1
+      && ATOMIC_LLONG_LOCK_FREE != 2)
+    abort ();
  }
Index: testsuite/29_atomics/atomic/cons/user_pod.cc
===================================================================
*** testsuite/29_atomics/atomic/cons/user_pod.cc	(.../trunk/libstdc++-v3)	(revision 180780)
--- testsuite/29_atomics/atomic/cons/user_pod.cc	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
***************
*** 1,7 ****
  // { dg-options "-std=gnu++0x" }
! // { dg-do link { xfail *-*-* } }
  
! // Copyright (C) 2009 Free Software Foundation, Inc.
  //
  // This file is part of the GNU ISO C++ Library.  This library is free
  // software; you can redistribute it and/or modify it under the
--- 1,7 ----
  // { dg-options "-std=gnu++0x" }
! // { dg-do link }
  
! // Copyright (C) 2009, 2011 Free Software Foundation, Inc.
  //
  // This file is part of the GNU ISO C++ Library.  This library is free
  // software; you can redistribute it and/or modify it under the
*************** struct dwordp
*** 29,35 ****
  void atomics()
  {
    std::atomic<dwordp> a;
!   bool b = a.is_lock_free(); // { dg-excess-errors "undefined reference to" }
  }
  
  int main()
--- 29,35 ----
  void atomics()
  {
    std::atomic<dwordp> a;
!   bool b = a.is_lock_free();
  }
  
  int main()
Index: testsuite/29_atomics/atomic/requirements/explicit_instantiation/1.cc
===================================================================
*** testsuite/29_atomics/atomic/requirements/explicit_instantiation/1.cc	(.../trunk/libstdc++-v3)	(revision 180780)
--- testsuite/29_atomics/atomic/requirements/explicit_instantiation/1.cc	(.../branches/cxx-mem-model/libstdc++-v3)	(revision 180832)
***************
*** 23,27 ****
  #include <atomic>
  #include <testsuite_character.h>
  
! template class std::atomic<__gnu_test::pod_char>;
  template class std::atomic<__gnu_test::pod_char*>;
--- 23,27 ----
  #include <atomic>
  #include <testsuite_character.h>
  
! template class std::atomic<__gnu_test::pod_state>;
  template class std::atomic<__gnu_test::pod_char*>;


end of thread, other threads: [~2011-11-11 23:14 UTC | newest]

Thread overview: 22+ messages
2011-11-03 23:52 cxx-mem-model merge [6 of 9] - libstdc++-v3 Andrew MacLeod
2011-11-04 18:17 ` Jeff Law
2011-11-04 18:53   ` Andrew MacLeod
2011-11-07  0:54 ` Hans-Peter Nilsson
2011-11-07  4:48   ` Andrew MacLeod
2011-11-07 11:36     ` Hans-Peter Nilsson
2011-11-07 14:41       ` Andrew MacLeod
2011-11-07 14:56       ` Andrew MacLeod
2011-11-07 15:38         ` Hans-Peter Nilsson
2011-11-07 16:28         ` Joseph S. Myers
2011-11-07 17:24           ` Andrew MacLeod
2011-11-07 17:43           ` Hans-Peter Nilsson
2011-11-07 18:27             ` Andrew MacLeod
2011-11-08  6:45               ` Hans-Peter Nilsson
2011-11-08 13:43                 ` Andrew MacLeod
2011-11-11 17:49                   ` Benjamin Kosnik
2011-11-11 17:56                     ` Andrew MacLeod
2011-11-11 21:07                       ` Hans-Peter Nilsson
2011-11-11 23:34                       ` Torvald Riegel
2011-11-11 20:27                     ` Hans-Peter Nilsson
2011-11-07 16:32         ` Richard Henderson
2011-11-08 20:22         ` Hans-Peter Nilsson
