public inbox for gcc-patches@gcc.gnu.org
* [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20
@ 2019-07-11 19:45 Jonathan Wakely
  2019-07-12  9:30 ` Jonathan Wakely
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Jonathan Wakely @ 2019-07-11 19:45 UTC (permalink / raw)
  To: libstdc++, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1373 bytes --]

This adds the new atomic types from C++2a, as proposed by P0019 and
P0020. To reduce duplication, the calls to the compiler's atomic
built-ins are wrapped in new functions in the __atomic_impl namespace.
These functions are currently only used by std::atomic<floating-point>
and std::atomic_ref, but could also be used for all other
specializations of std::atomic.
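
For reference, a minimal example of the user-facing API this patch adds
(not part of the patch itself; assumes a compiler and library in C++2a
mode, e.g. -std=gnu++2a):

  #include <atomic>

  int main()
  {
    std::atomic<double> d(1.0);
    d.fetch_add(0.5);   // atomic add, returns the old value (1.0)
    d += 0.25;          // atomic add, returns the new value (1.75)

    int plain = 0;      // an ordinary, non-atomic object
    {
      std::atomic_ref<int> r(plain);  // provides atomic access to 'plain'
      r.fetch_add(1);                 // atomic RMW through the reference
    }
    // Plain access is OK again once no atomic_refs to the object remain.
    return plain;       // 1
  }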

	* include/bits/atomic_base.h (__atomic_impl): New namespace for
	wrappers around atomic built-ins.
	(__atomic_float, __atomic_ref): New class templates for use as base
	classes.
	* include/std/atomic (atomic<float>, atomic<double>)
	(atomic<long double>): New explicit specializations.
	(atomic_ref): New class template.
	(__cpp_lib_atomic_ref): Define.
	* include/std/version (__cpp_lib_atomic_ref): Define.
	* testsuite/29_atomics/atomic/60695.cc: Adjust dg-error.
	* testsuite/29_atomics/atomic_float/1.cc: New test.
	* testsuite/29_atomics/atomic_float/requirements.cc: New test.
	* testsuite/29_atomics/atomic_ref/deduction.cc: New test.
	* testsuite/29_atomics/atomic_ref/float.cc: New test.
	* testsuite/29_atomics/atomic_ref/generic.cc: New test.
	* testsuite/29_atomics/atomic_ref/integral.cc: New test.
	* testsuite/29_atomics/atomic_ref/pointer.cc: New test.
	* testsuite/29_atomics/atomic_ref/requirements.cc: New test.

Tested x86_64-linux, committed to trunk.
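
Since the __atomic_fetch_add built-in does not handle floating-point
types, the patch implements fetch_add and fetch_sub for them with a
compare-exchange loop (see __fetch_add_flt in the patch). A
free-standing sketch of that pattern, written against the public API
rather than the internal wrappers, with the success order fixed to
seq_cst for simplicity:

  #include <atomic>

  // Returns the old value, like fetch_add. The weak compare-exchange
  // may fail spuriously, so loop; on failure it reloads 'oldval' with
  // the current value, and the new value is recomputed from it.
  float fetch_add_float(std::atomic<float>& a, float i)
  {
    float oldval = a.load(std::memory_order_relaxed);
    float newval = oldval + i;
    while (!a.compare_exchange_weak(oldval, newval,
                                    std::memory_order_seq_cst,
                                    std::memory_order_relaxed))
      newval = oldval + i;
    return oldval;
  }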


[-- Attachment #2: patch.txt --]
[-- Type: text/plain, Size: 90158 bytes --]

commit 6d63be4697285c362642b0e112441b75db4944ff
Author: redi <redi@138bc75d-0d04-0410-961f-82ee72b054a4>
Date:   Thu Jul 11 19:43:25 2019 +0000

    Define std::atomic_ref and std::atomic<floating-point> for C++20
    
    This adds the new atomic types from C++2a, as proposed by P0019 and
    P0020. To reduce duplication, the calls to the compiler's atomic
    built-ins are wrapped in new functions in the __atomic_impl namespace.
    These functions are currently only used by std::atomic<floating-point>
    and std::atomic_ref, but could also be used for all other
    specializations of std::atomic.
    
            * include/bits/atomic_base.h (__atomic_impl): New namespace for
            wrappers around atomic built-ins.
            (__atomic_float, __atomic_ref): New class templates for use as base
            classes.
            * include/std/atomic (atomic<float>, atomic<double>)
            (atomic<long double>): New explicit specializations.
            (atomic_ref): New class template.
            (__cpp_lib_atomic_ref): Define.
            * include/std/version (__cpp_lib_atomic_ref): Define.
            * testsuite/29_atomics/atomic/60695.cc: Adjust dg-error.
            * testsuite/29_atomics/atomic_float/1.cc: New test.
            * testsuite/29_atomics/atomic_float/requirements.cc: New test.
            * testsuite/29_atomics/atomic_ref/deduction.cc: New test.
            * testsuite/29_atomics/atomic_ref/float.cc: New test.
            * testsuite/29_atomics/atomic_ref/generic.cc: New test.
            * testsuite/29_atomics/atomic_ref/integral.cc: New test.
            * testsuite/29_atomics/atomic_ref/pointer.cc: New test.
            * testsuite/29_atomics/atomic_ref/requirements.cc: New test.
    
    git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@273420 138bc75d-0d04-0410-961f-82ee72b054a4

diff --git a/libstdc++-v3/include/bits/atomic_base.h b/libstdc++-v3/include/bits/atomic_base.h
index e30caef91bf..146e70a9f2e 100644
--- a/libstdc++-v3/include/bits/atomic_base.h
+++ b/libstdc++-v3/include/bits/atomic_base.h
@@ -35,6 +35,7 @@
 #include <bits/c++config.h>
 #include <stdint.h>
 #include <bits/atomic_lockfree_defines.h>
+#include <bits/move.h>
 
 #ifndef _GLIBCXX_ALWAYS_INLINE
 #define _GLIBCXX_ALWAYS_INLINE inline __attribute__((__always_inline__))
@@ -817,6 +818,876 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       { return __atomic_fetch_sub(&_M_p, _M_type_size(__d), int(__m)); }
     };
 
+#if __cplusplus > 201703L
+  // Implementation details of atomic_ref and atomic<floating-point>.
+  namespace __atomic_impl
+  {
+    // Remove volatile and create a non-deduced context for value arguments.
+    template<typename _Tp>
+      using _Val = remove_volatile_t<_Tp>;
+
+    // As above, but for difference_type arguments.
+    template<typename _Tp>
+      using _Diff = conditional_t<is_pointer_v<_Tp>, ptrdiff_t, _Val<_Tp>>;
+
+    template<size_t _Size, size_t _Align>
+      _GLIBCXX_ALWAYS_INLINE bool
+      is_lock_free() noexcept
+      {
+	// Produce a fake, minimally aligned pointer.
+	return __atomic_is_lock_free(_Size, reinterpret_cast<void *>(-_Align));
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE void
+      store(_Tp* __ptr, _Val<_Tp> __t, memory_order __m) noexcept
+      { __atomic_store(__ptr, std::__addressof(__t), int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      load(_Tp* __ptr, memory_order __m) noexcept
+      {
+	alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
+	_Tp* __dest = reinterpret_cast<_Tp*>(__buf);
+	__atomic_load(__ptr, __dest, int(__m));
+	return *__dest;
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      exchange(_Tp* __ptr, _Val<_Tp> __desired, memory_order __m) noexcept
+      {
+	alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
+	_Tp* __dest = reinterpret_cast<_Tp*>(__buf);
+	__atomic_exchange(__ptr, std::__addressof(__desired), __dest, int(__m));
+	return *__dest;
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE bool
+      compare_exchange_weak(_Tp* __ptr, _Val<_Tp>& __expected,
+			    _Val<_Tp> __desired, memory_order __success,
+			    memory_order __failure) noexcept
+      {
+	return __atomic_compare_exchange(__ptr, std::__addressof(__expected),
+					 std::__addressof(__desired), true,
+					 int(__success), int(__failure));
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE bool
+      compare_exchange_strong(_Tp* __ptr, _Val<_Tp>& __expected,
+			      _Val<_Tp> __desired, memory_order __success,
+			      memory_order __failure) noexcept
+      {
+	return __atomic_compare_exchange(__ptr, std::__addressof(__expected),
+					 std::__addressof(__desired), false,
+					 int(__success), int(__failure));
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_add(_Tp* __ptr, _Diff<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_add(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_sub(_Tp* __ptr, _Diff<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_sub(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_and(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_and(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_or(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_or(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_xor(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_xor(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __add_fetch(_Tp* __ptr, _Diff<_Tp> __i) noexcept
+      { return __atomic_add_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __sub_fetch(_Tp* __ptr, _Diff<_Tp> __i) noexcept
+      { return __atomic_sub_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __and_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      { return __atomic_and_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __or_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      { return __atomic_or_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __xor_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      { return __atomic_xor_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _Tp
+      __fetch_add_flt(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      {
+	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
+	_Val<_Tp> __newval = __oldval + __i;
+	while (!compare_exchange_weak(__ptr, __oldval, __newval, __m,
+				      memory_order_relaxed))
+	  __newval = __oldval + __i;
+	return __oldval;
+      }
+
+    template<typename _Tp>
+      _Tp
+      __fetch_sub_flt(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      {
+	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
+	_Val<_Tp> __newval = __oldval - __i;
+	while (!compare_exchange_weak(__ptr, __oldval, __newval, __m,
+				      memory_order_relaxed))
+	  __newval = __oldval - __i;
+	return __oldval;
+      }
+
+    template<typename _Tp>
+      _Tp
+      __add_fetch_flt(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      {
+	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
+	_Val<_Tp> __newval = __oldval + __i;
+	while (!compare_exchange_weak(__ptr, __oldval, __newval,
+				      memory_order_seq_cst,
+				      memory_order_relaxed))
+	  __newval = __oldval + __i;
+	return __newval;
+      }
+
+    template<typename _Tp>
+      _Tp
+      __sub_fetch_flt(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      {
+	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
+	_Val<_Tp> __newval = __oldval - __i;
+	while (!compare_exchange_weak(__ptr, __oldval, __newval,
+				      memory_order_seq_cst,
+				      memory_order_relaxed))
+	  __newval = __oldval - __i;
+	return __newval;
+      }
+  } // namespace __atomic_impl
+
+  // base class for atomic<floating-point-type>
+  template<typename _Fp>
+    struct __atomic_float
+    {
+      static_assert(is_floating_point_v<_Fp>);
+
+      static constexpr size_t _S_alignment = __alignof__(_Fp);
+
+    public:
+      using value_type = _Fp;
+      using difference_type = value_type;
+
+      static constexpr bool is_always_lock_free
+	= __atomic_always_lock_free(sizeof(_Fp), 0);
+
+      __atomic_float() = default;
+
+      constexpr
+      __atomic_float(_Fp __t) : _M_fp(__t)
+      { }
+
+      __atomic_float(const __atomic_float&) = delete;
+      __atomic_float& operator=(const __atomic_float&) = delete;
+      __atomic_float& operator=(const __atomic_float&) volatile = delete;
+
+      _Fp
+      operator=(_Fp __t) volatile noexcept
+      {
+	this->store(__t);
+	return __t;
+      }
+
+      _Fp
+      operator=(_Fp __t) noexcept
+      {
+	this->store(__t);
+	return __t;
+      }
+
+      bool
+      is_lock_free() const volatile noexcept
+      { return __atomic_impl::is_lock_free<sizeof(_Fp), _S_alignment>(); }
+
+      bool
+      is_lock_free() const noexcept
+      { return __atomic_impl::is_lock_free<sizeof(_Fp), _S_alignment>(); }
+
+      void
+      store(_Fp __t, memory_order __m = memory_order_seq_cst) volatile noexcept
+      { __atomic_impl::store(&_M_fp, __t, __m); }
+
+      void
+      store(_Fp __t, memory_order __m = memory_order_seq_cst) noexcept
+      { __atomic_impl::store(&_M_fp, __t, __m); }
+
+      _Fp
+      load(memory_order __m = memory_order_seq_cst) const volatile noexcept
+      { return __atomic_impl::load(&_M_fp, __m); }
+
+      _Fp
+      load(memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::load(&_M_fp, __m); }
+
+      operator _Fp() const volatile noexcept { return this->load(); }
+      operator _Fp() const noexcept { return this->load(); }
+
+      _Fp
+      exchange(_Fp __desired,
+	       memory_order __m = memory_order_seq_cst) volatile noexcept
+      { return __atomic_impl::exchange(&_M_fp, __desired, __m); }
+
+      _Fp
+      exchange(_Fp __desired,
+	       memory_order __m = memory_order_seq_cst) noexcept
+      { return __atomic_impl::exchange(&_M_fp, __desired, __m); }
+
+      bool
+      compare_exchange_weak(_Fp& __expected, _Fp __desired,
+			    memory_order __success,
+			    memory_order __failure) noexcept
+      {
+	return __atomic_impl::compare_exchange_weak(&_M_fp,
+						    __expected, __desired,
+						    __success, __failure);
+      }
+
+      bool
+      compare_exchange_weak(_Fp& __expected, _Fp __desired,
+			    memory_order __success,
+			    memory_order __failure) volatile noexcept
+      {
+	return __atomic_impl::compare_exchange_weak(&_M_fp,
+						    __expected, __desired,
+						    __success, __failure);
+      }
+
+      bool
+      compare_exchange_strong(_Fp& __expected, _Fp __desired,
+			      memory_order __success,
+			      memory_order __failure) noexcept
+      {
+	return __atomic_impl::compare_exchange_strong(&_M_fp,
+						      __expected, __desired,
+						      __success, __failure);
+      }
+
+      bool
+      compare_exchange_strong(_Fp& __expected, _Fp __desired,
+			      memory_order __success,
+			      memory_order __failure) volatile noexcept
+      {
+	return __atomic_impl::compare_exchange_strong(&_M_fp,
+						      __expected, __desired,
+						      __success, __failure);
+      }
+
+      bool
+      compare_exchange_weak(_Fp& __expected, _Fp __desired,
+			    memory_order __order = memory_order_seq_cst)
+      noexcept
+      {
+	return compare_exchange_weak(__expected, __desired, __order,
+                                     __cmpexch_failure_order(__order));
+      }
+
+      bool
+      compare_exchange_weak(_Fp& __expected, _Fp __desired,
+			    memory_order __order = memory_order_seq_cst)
+      volatile noexcept
+      {
+	return compare_exchange_weak(__expected, __desired, __order,
+                                     __cmpexch_failure_order(__order));
+      }
+
+      bool
+      compare_exchange_strong(_Fp& __expected, _Fp __desired,
+			      memory_order __order = memory_order_seq_cst)
+      noexcept
+      {
+	return compare_exchange_strong(__expected, __desired, __order,
+				       __cmpexch_failure_order(__order));
+      }
+
+      bool
+      compare_exchange_strong(_Fp& __expected, _Fp __desired,
+			      memory_order __order = memory_order_seq_cst)
+      volatile noexcept
+      {
+	return compare_exchange_strong(__expected, __desired, __order,
+				       __cmpexch_failure_order(__order));
+      }
+
+      value_type
+      fetch_add(value_type __i,
+		memory_order __m = memory_order_seq_cst) noexcept
+      { return __atomic_impl::__fetch_add_flt(&_M_fp, __i, __m); }
+
+      value_type
+      fetch_add(value_type __i,
+		memory_order __m = memory_order_seq_cst) volatile noexcept
+      { return __atomic_impl::__fetch_add_flt(&_M_fp, __i, __m); }
+
+      value_type
+      fetch_sub(value_type __i,
+		memory_order __m = memory_order_seq_cst) noexcept
+      { return __atomic_impl::__fetch_sub_flt(&_M_fp, __i, __m); }
+
+      value_type
+      fetch_sub(value_type __i,
+		memory_order __m = memory_order_seq_cst) volatile noexcept
+      { return __atomic_impl::__fetch_sub_flt(&_M_fp, __i, __m); }
+
+      value_type
+      operator+=(value_type __i) noexcept
+      { return __atomic_impl::__add_fetch_flt(&_M_fp, __i); }
+
+      value_type
+      operator+=(value_type __i) volatile noexcept
+      { return __atomic_impl::__add_fetch_flt(&_M_fp, __i); }
+
+      value_type
+      operator-=(value_type __i) noexcept
+      { return __atomic_impl::__sub_fetch_flt(&_M_fp, __i); }
+
+      value_type
+      operator-=(value_type __i) volatile noexcept
+      { return __atomic_impl::__sub_fetch_flt(&_M_fp, __i); }
+
+    private:
+      alignas(_S_alignment) _Fp _M_fp;
+    };
+
+  template<typename _Tp,
+	   bool = is_integral_v<_Tp>, bool = is_floating_point_v<_Tp>>
+    struct __atomic_ref;
+
+  // base class for non-integral, non-floating-point, non-pointer types
+  template<typename _Tp>
+    struct __atomic_ref<_Tp, false, false>
+    {
+      static_assert(is_trivially_copyable_v<_Tp>);
+
+      // 1/2/4/8/16-byte types must be aligned to at least their size.
+      static constexpr int _S_min_alignment
+	= (sizeof(_Tp) & (sizeof(_Tp) - 1)) || sizeof(_Tp) > 16
+	? 0 : sizeof(_Tp);
+
+    public:
+      using value_type = _Tp;
+
+      static constexpr bool is_always_lock_free
+	= __atomic_always_lock_free(sizeof(_Tp), 0);
+
+      static constexpr size_t required_alignment
+	= _S_min_alignment > alignof(_Tp) ? _S_min_alignment : alignof(_Tp);
+
+      __atomic_ref& operator=(const __atomic_ref&) = delete;
+
+      explicit
+      __atomic_ref(_Tp& __t) : _M_ptr(std::__addressof(__t))
+      { __glibcxx_assert(((uintptr_t)_M_ptr % required_alignment) == 0); }
+
+      __atomic_ref(const __atomic_ref&) noexcept = default;
+
+      _Tp
+      operator=(_Tp __t) const noexcept
+      {
+	this->store(__t);
+	return __t;
+      }
+
+      operator _Tp() const noexcept { return this->load(); }
+
+      bool
+      is_lock_free() const noexcept
+      { return __atomic_impl::is_lock_free<sizeof(_Tp), required_alignment>(); }
+
+      void
+      store(_Tp __t, memory_order __m = memory_order_seq_cst) const noexcept
+      { __atomic_impl::store(_M_ptr, __t, __m); }
+
+      _Tp
+      load(memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::load(_M_ptr, __m); }
+
+      _Tp
+      exchange(_Tp __desired, memory_order __m = memory_order_seq_cst)
+      const noexcept
+      { return __atomic_impl::exchange(_M_ptr, __desired, __m); }
+
+      bool
+      compare_exchange_weak(_Tp& __expected, _Tp __desired,
+			    memory_order __success,
+			    memory_order __failure) const noexcept
+      {
+	return __atomic_impl::compare_exchange_weak(_M_ptr,
+						    __expected, __desired,
+						    __success, __failure);
+      }
+
+      bool
+      compare_exchange_strong(_Tp& __expected, _Tp __desired,
+			    memory_order __success,
+			    memory_order __failure) const noexcept
+      {
+	return __atomic_impl::compare_exchange_strong(_M_ptr,
+						      __expected, __desired,
+						      __success, __failure);
+      }
+
+      bool
+      compare_exchange_weak(_Tp& __expected, _Tp __desired,
+			    memory_order __order = memory_order_seq_cst)
+      const noexcept
+      {
+	return compare_exchange_weak(__expected, __desired, __order,
+                                     __cmpexch_failure_order(__order));
+      }
+
+      bool
+      compare_exchange_strong(_Tp& __expected, _Tp __desired,
+			      memory_order __order = memory_order_seq_cst)
+      const noexcept
+      {
+	return compare_exchange_strong(__expected, __desired, __order,
+				       __cmpexch_failure_order(__order));
+      }
+
+    private:
+      _Tp* _M_ptr;
+    };
+
+  // base class for atomic_ref<integral-type>
+  template<typename _Tp>
+    struct __atomic_ref<_Tp, true, false>
+    {
+      static_assert(is_integral_v<_Tp>);
+
+    public:
+      using value_type = _Tp;
+      using difference_type = value_type;
+
+      static constexpr bool is_always_lock_free
+	= __atomic_always_lock_free(sizeof(_Tp), 0);
+
+      static constexpr size_t required_alignment
+	= sizeof(_Tp) > alignof(_Tp) ? sizeof(_Tp) : alignof(_Tp);
+
+      __atomic_ref() = delete;
+      __atomic_ref& operator=(const __atomic_ref&) = delete;
+
+      explicit
+      __atomic_ref(_Tp& __t) : _M_ptr(&__t)
+      { __glibcxx_assert(((uintptr_t)_M_ptr % required_alignment) == 0); }
+
+      __atomic_ref(const __atomic_ref&) noexcept = default;
+
+      _Tp
+      operator=(_Tp __t) const noexcept
+      {
+	this->store(__t);
+	return __t;
+      }
+
+      operator _Tp() const noexcept { return this->load(); }
+
+      bool
+      is_lock_free() const noexcept
+      {
+	return __atomic_impl::is_lock_free<sizeof(_Tp), required_alignment>();
+      }
+
+      void
+      store(_Tp __t, memory_order __m = memory_order_seq_cst) const noexcept
+      { __atomic_impl::store(_M_ptr, __t, __m); }
+
+      _Tp
+      load(memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::load(_M_ptr, __m); }
+
+      _Tp
+      exchange(_Tp __desired,
+	       memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::exchange(_M_ptr, __desired, __m); }
+
+      bool
+      compare_exchange_weak(_Tp& __expected, _Tp __desired,
+			    memory_order __success,
+			    memory_order __failure) const noexcept
+      {
+	return __atomic_impl::compare_exchange_weak(_M_ptr,
+						    __expected, __desired,
+						    __success, __failure);
+      }
+
+      bool
+      compare_exchange_strong(_Tp& __expected, _Tp __desired,
+			      memory_order __success,
+			      memory_order __failure) const noexcept
+      {
+	return __atomic_impl::compare_exchange_strong(_M_ptr,
+						      __expected, __desired,
+						      __success, __failure);
+      }
+
+      bool
+      compare_exchange_weak(_Tp& __expected, _Tp __desired,
+			    memory_order __order = memory_order_seq_cst)
+      const noexcept
+      {
+	return compare_exchange_weak(__expected, __desired, __order,
+                                     __cmpexch_failure_order(__order));
+      }
+
+      bool
+      compare_exchange_strong(_Tp& __expected, _Tp __desired,
+			      memory_order __order = memory_order_seq_cst)
+      const noexcept
+      {
+	return compare_exchange_strong(__expected, __desired, __order,
+				       __cmpexch_failure_order(__order));
+      }
+
+      value_type
+      fetch_add(value_type __i,
+		memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::fetch_add(_M_ptr, __i, __m); }
+
+      value_type
+      fetch_sub(value_type __i,
+		memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::fetch_sub(_M_ptr, __i, __m); }
+
+      value_type
+      fetch_and(value_type __i,
+		memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::fetch_and(_M_ptr, __i, __m); }
+
+      value_type
+      fetch_or(value_type __i,
+	       memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::fetch_or(_M_ptr, __i, __m); }
+
+      value_type
+      fetch_xor(value_type __i,
+		memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::fetch_xor(_M_ptr, __i, __m); }
+
+      _GLIBCXX_ALWAYS_INLINE value_type
+      operator++(int) const noexcept
+      { return fetch_add(1); }
+
+      _GLIBCXX_ALWAYS_INLINE value_type
+      operator--(int) const noexcept
+      { return fetch_sub(1); }
+
+      value_type
+      operator++() const noexcept
+      { return __atomic_impl::__add_fetch(_M_ptr, value_type(1)); }
+
+      value_type
+      operator--() const noexcept
+      { return __atomic_impl::__sub_fetch(_M_ptr, value_type(1)); }
+
+      value_type
+      operator+=(value_type __i) const noexcept
+      { return __atomic_impl::__add_fetch(_M_ptr, __i); }
+
+      value_type
+      operator-=(value_type __i) const noexcept
+      { return __atomic_impl::__sub_fetch(_M_ptr, __i); }
+
+      value_type
+      operator&=(value_type __i) const noexcept
+      { return __atomic_impl::__and_fetch(_M_ptr, __i); }
+
+      value_type
+      operator|=(value_type __i) const noexcept
+      { return __atomic_impl::__or_fetch(_M_ptr, __i); }
+
+      value_type
+      operator^=(value_type __i) const noexcept
+      { return __atomic_impl::__xor_fetch(_M_ptr, __i); }
+
+    private:
+      _Tp* _M_ptr;
+    };
+
+  // base class for atomic_ref<floating-point-type>
+  template<typename _Fp>
+    struct __atomic_ref<_Fp, false, true>
+    {
+      static_assert(is_floating_point_v<_Fp>);
+
+    public:
+      using value_type = _Fp;
+      using difference_type = value_type;
+
+      static constexpr bool is_always_lock_free
+	= __atomic_always_lock_free(sizeof(_Fp), 0);
+
+      static constexpr size_t required_alignment = __alignof__(_Fp);
+
+      __atomic_ref() = delete;
+      __atomic_ref& operator=(const __atomic_ref&) = delete;
+
+      explicit
+      __atomic_ref(_Fp& __t) : _M_ptr(&__t)
+      { __glibcxx_assert(((uintptr_t)_M_ptr % required_alignment) == 0); }
+
+      __atomic_ref(const __atomic_ref&) noexcept = default;
+
+      _Fp
+      operator=(_Fp __t) const noexcept
+      {
+	this->store(__t);
+	return __t;
+      }
+
+      operator _Fp() const noexcept { return this->load(); }
+
+      bool
+      is_lock_free() const noexcept
+      {
+	return __atomic_impl::is_lock_free<sizeof(_Fp), required_alignment>();
+      }
+
+      void
+      store(_Fp __t, memory_order __m = memory_order_seq_cst) const noexcept
+      { __atomic_impl::store(_M_ptr, __t, __m); }
+
+      _Fp
+      load(memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::load(_M_ptr, __m); }
+
+      _Fp
+      exchange(_Fp __desired,
+	       memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::exchange(_M_ptr, __desired, __m); }
+
+      bool
+      compare_exchange_weak(_Fp& __expected, _Fp __desired,
+			    memory_order __success,
+			    memory_order __failure) const noexcept
+      {
+	return __atomic_impl::compare_exchange_weak(_M_ptr,
+						    __expected, __desired,
+						    __success, __failure);
+      }
+
+      bool
+      compare_exchange_strong(_Fp& __expected, _Fp __desired,
+			    memory_order __success,
+			    memory_order __failure) const noexcept
+      {
+	return __atomic_impl::compare_exchange_strong(_M_ptr,
+						      __expected, __desired,
+						      __success, __failure);
+      }
+
+      bool
+      compare_exchange_weak(_Fp& __expected, _Fp __desired,
+			    memory_order __order = memory_order_seq_cst)
+      const noexcept
+      {
+	return compare_exchange_weak(__expected, __desired, __order,
+                                     __cmpexch_failure_order(__order));
+      }
+
+      bool
+      compare_exchange_strong(_Fp& __expected, _Fp __desired,
+			      memory_order __order = memory_order_seq_cst)
+      const noexcept
+      {
+	return compare_exchange_strong(__expected, __desired, __order,
+				       __cmpexch_failure_order(__order));
+      }
+
+      value_type
+      fetch_add(value_type __i,
+		memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::__fetch_add_flt(_M_ptr, __i, __m); }
+
+      value_type
+      fetch_sub(value_type __i,
+		memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::__fetch_sub_flt(_M_ptr, __i, __m); }
+
+      value_type
+      operator+=(value_type __i) const noexcept
+      { return __atomic_impl::__add_fetch_flt(_M_ptr, __i); }
+
+      value_type
+      operator-=(value_type __i) const noexcept
+      { return __atomic_impl::__sub_fetch_flt(_M_ptr, __i); }
+
+    private:
+      _Fp* _M_ptr;
+    };
+
+  // base class for atomic_ref<pointer-type>
+  template<typename _Tp>
+    struct __atomic_ref<_Tp*, false, false>
+    {
+    public:
+      using value_type = _Tp*;
+      using difference_type = ptrdiff_t;
+
+      static constexpr bool is_always_lock_free = ATOMIC_POINTER_LOCK_FREE == 2;
+
+      static constexpr size_t required_alignment = __alignof__(_Tp*);
+
+      __atomic_ref() = delete;
+      __atomic_ref& operator=(const __atomic_ref&) = delete;
+
+      explicit
+      __atomic_ref(_Tp*& __t) : _M_ptr(std::__addressof(__t))
+      { __glibcxx_assert(((uintptr_t)_M_ptr % required_alignment) == 0); }
+
+      __atomic_ref(const __atomic_ref&) noexcept = default;
+
+      _Tp*
+      operator=(_Tp* __t) const noexcept
+      {
+	this->store(__t);
+	return __t;
+      }
+
+      operator _Tp*() const noexcept { return this->load(); }
+
+      bool
+      is_lock_free() const noexcept
+      {
+	return __atomic_impl::is_lock_free<sizeof(_Tp*), required_alignment>();
+      }
+
+      void
+      store(_Tp* __t, memory_order __m = memory_order_seq_cst) const noexcept
+      { __atomic_impl::store(_M_ptr, __t, __m); }
+
+      _Tp*
+      load(memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::load(_M_ptr, __m); }
+
+      _Tp*
+      exchange(_Tp* __desired,
+	       memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::exchange(_M_ptr, __desired, __m); }
+
+      bool
+      compare_exchange_weak(_Tp*& __expected, _Tp* __desired,
+			    memory_order __success,
+			    memory_order __failure) const noexcept
+      {
+	return __atomic_impl::compare_exchange_weak(_M_ptr,
+						    __expected, __desired,
+						    __success, __failure);
+      }
+
+      bool
+      compare_exchange_strong(_Tp*& __expected, _Tp* __desired,
+			    memory_order __success,
+			    memory_order __failure) const noexcept
+      {
+	return __atomic_impl::compare_exchange_strong(_M_ptr,
+						      __expected, __desired,
+						      __success, __failure);
+      }
+
+      bool
+      compare_exchange_weak(_Tp*& __expected, _Tp* __desired,
+			    memory_order __order = memory_order_seq_cst)
+      const noexcept
+      {
+	return compare_exchange_weak(__expected, __desired, __order,
+                                     __cmpexch_failure_order(__order));
+      }
+
+      bool
+      compare_exchange_strong(_Tp*& __expected, _Tp* __desired,
+			      memory_order __order = memory_order_seq_cst)
+      const noexcept
+      {
+	return compare_exchange_strong(__expected, __desired, __order,
+				       __cmpexch_failure_order(__order));
+      }
+
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_add(difference_type __d,
+		memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::fetch_add(_M_ptr, _S_type_size(__d), __m); }
+
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_sub(difference_type __d,
+		memory_order __m = memory_order_seq_cst) const noexcept
+      { return __atomic_impl::fetch_sub(_M_ptr, _S_type_size(__d), __m); }
+
+      value_type
+      operator++(int) const noexcept
+      { return fetch_add(1); }
+
+      value_type
+      operator--(int) const noexcept
+      { return fetch_sub(1); }
+
+      value_type
+      operator++() const noexcept
+      {
+	return __atomic_impl::__add_fetch(_M_ptr, _S_type_size(1));
+      }
+
+      value_type
+      operator--() const noexcept
+      {
+	return __atomic_impl::__sub_fetch(_M_ptr, _S_type_size(1));
+      }
+
+      value_type
+      operator+=(difference_type __d) const noexcept
+      {
+	return __atomic_impl::__add_fetch(_M_ptr, _S_type_size(__d));
+      }
+
+      value_type
+      operator-=(difference_type __d) const noexcept
+      {
+	return __atomic_impl::__sub_fetch(_M_ptr, _S_type_size(__d));
+      }
+
+    private:
+      static constexpr ptrdiff_t
+      _S_type_size(ptrdiff_t __d) noexcept
+      {
+	static_assert(is_object_v<_Tp>);
+	return __d * sizeof(_Tp);
+      }
+
+      _Tp** _M_ptr;
+    };
+
+#endif // C++2a
+
   // @} group atomics
 
 _GLIBCXX_END_NAMESPACE_VERSION
diff --git a/libstdc++-v3/include/std/atomic b/libstdc++-v3/include/std/atomic
index 699431e9727..26d8d3946da 100644
--- a/libstdc++-v3/include/std/atomic
+++ b/libstdc++-v3/include/std/atomic
@@ -39,7 +39,6 @@
 #else
 
 #include <bits/atomic_base.h>
-#include <bits/move.h>
 
 namespace std _GLIBCXX_VISIBILITY(default)
 {
@@ -1472,6 +1471,71 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		     __atomic_val_t<_ITp> __i) noexcept
     { return atomic_fetch_xor_explicit(__a, __i, memory_order_seq_cst); }
 
+#if __cplusplus > 201703L
+  template<>
+    struct atomic<float> : __atomic_float<float>
+    {
+      atomic() noexcept = default;
+
+      constexpr
+      atomic(float __fp) noexcept : __atomic_float<float>(__fp)
+      { }
+
+      atomic& operator=(const atomic&) volatile = delete;
+      atomic& operator=(const atomic&) = delete;
+
+      using __atomic_float<float>::operator=;
+    };
+
+  template<>
+    struct atomic<double> : __atomic_float<double>
+    {
+      atomic() noexcept = default;
+
+      constexpr
+      atomic(double __fp) noexcept : __atomic_float<double>(__fp)
+      { }
+
+      atomic& operator=(const atomic&) volatile = delete;
+      atomic& operator=(const atomic&) = delete;
+
+      using __atomic_float<double>::operator=;
+    };
+
+  template<>
+    struct atomic<long double> : __atomic_float<long double>
+    {
+      atomic() noexcept = default;
+
+      constexpr
+      atomic(long double __fp) noexcept : __atomic_float<long double>(__fp)
+      { }
+
+      atomic& operator=(const atomic&) volatile = delete;
+      atomic& operator=(const atomic&) = delete;
+
+      using __atomic_float<long double>::operator=;
+    };
+
+#define __cpp_lib_atomic_ref 201806L
+
+  /// Class template to provide atomic operations on a non-atomic variable.
+  template<typename _Tp>
+    struct atomic_ref : __atomic_ref<_Tp>
+    {
+      explicit
+      atomic_ref(_Tp& __t) noexcept : __atomic_ref<_Tp>(__t)
+      { }
+
+      atomic_ref& operator=(const atomic_ref&) = delete;
+
+      atomic_ref(const atomic_ref&) = default;
+
+      using __atomic_ref<_Tp>::operator=;
+    };
+
+#endif // C++2a
+
   // @} group atomics
 
 _GLIBCXX_END_NAMESPACE_VERSION
diff --git a/libstdc++-v3/include/std/version b/libstdc++-v3/include/std/version
index e300fc38bc7..d134f7fde01 100644
--- a/libstdc++-v3/include/std/version
+++ b/libstdc++-v3/include/std/version
@@ -150,6 +150,7 @@
 
 #if __cplusplus > 201703L
 // c++2a
+#define __cpp_lib_atomic_ref 201806L
 #define __cpp_lib_bind_front 201902L
 #define __cpp_lib_bounded_array_traits 201902L
 #if __cpp_impl_destroying_delete
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic/60695.cc b/libstdc++-v3/testsuite/29_atomics/atomic/60695.cc
index 58d554cefc1..5065730dd91 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic/60695.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic/60695.cc
@@ -27,4 +27,4 @@ struct X {
   char stuff[0]; // GNU extension, type has zero size
 };
 
-std::atomic<X> a;  // { dg-error "not supported" "" { target *-*-* } 194 }
+std::atomic<X> a;  // { dg-error "zero-sized types" "" { target *-*-* } 0 }
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_float/1.cc b/libstdc++-v3/testsuite/29_atomics/atomic_float/1.cc
new file mode 100644
index 00000000000..bd0e353538d
--- /dev/null
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_float/1.cc
@@ -0,0 +1,573 @@
+// Copyright (C) 2019 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++2a" }
+// { dg-do run { target c++2a } }
+
+#include <atomic>
+#include <testsuite_hooks.h>
+
+void
+test01()
+{
+  const auto mo = std::memory_order_relaxed;
+  bool ok;
+  float expected;
+
+  if constexpr (std::atomic<float>::is_always_lock_free)
+  {
+    std::atomic<float> a0;
+    std::atomic<float> a1(1.0f);
+    ok = a0.is_lock_free();
+    a0 = a1.load();
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( a0.load(mo) == a0.load() );
+    a0.store(0.5f);
+    a1.store(0.5f, mo);
+    VERIFY( a0.load() == a1.load() );
+    auto f0 = a0.exchange(12.5f);
+    auto f1 = a1.exchange(12.5f, mo);
+    VERIFY( a0 == 12.5f );
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( f0 == 0.5f );
+    VERIFY( f0 == f1 );
+
+    expected = 12.5f;
+    while (!a0.compare_exchange_weak(expected, 1.6f, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == 1.6f );
+    VERIFY( expected == 12.5f );
+    expected = 1.5f;
+    ok = a1.compare_exchange_weak(expected, 1.6f, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5f && expected == 12.5f );
+    VERIFY( expected == 12.5f );
+    expected = 1.6f;
+    ok = a0.compare_exchange_strong(expected, 3.2f, mo, mo);
+    VERIFY( ok && a0.load() == 3.2f );
+    VERIFY( expected == 1.6f );
+    expected = 1.5f;
+    ok = a1.compare_exchange_strong(expected, 3.2f, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5f && expected == 12.5f );
+
+    expected = 3.2f;
+    while (!a0.compare_exchange_weak(expected, .64f))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == .64f );
+    expected = 12.5f;
+    while (!a1.compare_exchange_weak(expected, 1.6f, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a1.load() == 1.6f );
+    expected = 0.5f;
+    ok = a0.compare_exchange_weak(expected, 3.2f);
+    VERIFY( !ok && a0.load() == .64f && expected == .64f );
+    expected = 0.5f;
+    ok = a1.compare_exchange_weak(expected, 3.2f, mo);
+    VERIFY( !ok && a1.load() == 1.6f && expected == 1.6f );
+
+    expected = .64f;
+    ok = a0.compare_exchange_strong(expected, 12.8f);
+    VERIFY( ok && a0.load() == 12.8f );
+    expected = 1.6f;
+    ok = a1.compare_exchange_strong(expected, 2.56f, mo);
+    VERIFY( ok && a1.load() == 2.56f );
+    expected = 0.5f;
+    ok = a0.compare_exchange_strong(expected, 3.2f);
+    VERIFY( !ok && a0.load() == 12.8f && expected == 12.8f );
+    expected = 0.5f;
+    ok = a1.compare_exchange_strong(expected, 3.2f, mo);
+    VERIFY( !ok && a1.load() == 2.56f && expected == 2.56f );
+
+    f0 = a0.fetch_add(1.2f);
+    VERIFY( f0 == 12.8f );
+    VERIFY( a0 == 14.0f );
+    f1 = a1.fetch_add(2.4f, mo);
+    VERIFY( f1 == 2.56f );
+    VERIFY( a1 == 4.96f );
+
+    f0 = a0.fetch_sub(1.2f);
+    VERIFY( f0 == 14.0f );
+    VERIFY( a0 == 12.8f );
+    f1 = a1.fetch_sub(3.5f, mo);
+    VERIFY( f1 == 4.96f );
+    VERIFY( a1 == 1.46f );
+
+    f0 = a0 += 1.2f;
+    VERIFY( f0 == 14.0f );
+    VERIFY( a0 == 14.0f );
+
+    f0 = a0 -= 0.8f;
+    VERIFY( f0 == 13.2f );
+    VERIFY( a0 == 13.2f );
+  }
+
+  // Repeat for volatile std::atomic<float>
+  if constexpr (std::atomic<float>::is_always_lock_free)
+  {
+    volatile std::atomic<float> a0;
+    volatile std::atomic<float> a1(1.0f);
+    ok = a0.is_lock_free();
+    a0 = a1.load();
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( a0.load(mo) == a0.load() );
+    a0.store(0.5f);
+    a1.store(0.5f, mo);
+    VERIFY( a0.load() == a1.load() );
+    auto f0 = a0.exchange(12.5f);
+    auto f1 = a1.exchange(12.5f, mo);
+    VERIFY( a0 == 12.5f );
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( f0 == 0.5f );
+    VERIFY( f0 == f1 );
+
+    expected = 12.5f;
+    while (!a0.compare_exchange_weak(expected, 1.6f, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == 1.6f );
+    VERIFY( expected == 12.5f );
+    expected = 1.5f;
+    ok = a1.compare_exchange_weak(expected, 1.6f, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5f && expected == 12.5f );
+    VERIFY( expected == 12.5f );
+    expected = 1.6f;
+    ok = a0.compare_exchange_strong(expected, 3.2f, mo, mo);
+    VERIFY( ok && a0.load() == 3.2f );
+    VERIFY( expected == 1.6f );
+    expected = 1.5f;
+    ok = a1.compare_exchange_strong(expected, 3.2f, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5f && expected == 12.5f );
+
+    expected = 3.2f;
+    while (!a0.compare_exchange_weak(expected, .64f))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == .64f );
+    expected = 12.5f;
+    while (!a1.compare_exchange_weak(expected, 1.6f, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a1.load() == 1.6f );
+    expected = 0.5f;
+    ok = a0.compare_exchange_weak(expected, 3.2f);
+    VERIFY( !ok && a0.load() == .64f && expected == .64f );
+    expected = 0.5f;
+    ok = a1.compare_exchange_weak(expected, 3.2f, mo);
+    VERIFY( !ok && a1.load() == 1.6f && expected == 1.6f );
+
+    expected = .64f;
+    ok = a0.compare_exchange_strong(expected, 12.8f);
+    VERIFY( ok && a0.load() == 12.8f );
+    expected = 1.6f;
+    ok = a1.compare_exchange_strong(expected, 2.56f, mo);
+    VERIFY( ok && a1.load() == 2.56f );
+    expected = 0.5f;
+    ok = a0.compare_exchange_strong(expected, 3.2f);
+    VERIFY( !ok && a0.load() == 12.8f && expected == 12.8f );
+    expected = 0.5f;
+    ok = a1.compare_exchange_strong(expected, 3.2f, mo);
+    VERIFY( !ok && a1.load() == 2.56f && expected == 2.56f );
+
+    f0 = a0.fetch_add(1.2f);
+    VERIFY( f0 == 12.8f );
+    VERIFY( a0 == 14.0f );
+    f1 = a1.fetch_add(2.4f, mo);
+    VERIFY( f1 == 2.56f );
+    VERIFY( a1 == 4.96f );
+
+    f0 = a0.fetch_sub(1.2f);
+    VERIFY( f0 == 14.0f );
+    VERIFY( a0 == 12.8f );
+    f1 = a1.fetch_sub(3.5f, mo);
+    VERIFY( f1 == 4.96f );
+    VERIFY( a1 == 1.46f );
+
+    f0 = a0 += 1.2f;
+    VERIFY( f0 == 14.0f );
+    VERIFY( a0 == 14.0f );
+
+    f0 = a0 -= 0.8f;
+    VERIFY( f0 == 13.2f );
+    VERIFY( a0 == 13.2f );
+  }
+}
+
+void
+test02()
+{
+  const auto mo = std::memory_order_relaxed;
+  bool ok;
+  double expected;
+
+  if constexpr (std::atomic<double>::is_always_lock_free)
+  {
+    std::atomic<double> a0;
+    std::atomic<double> a1(1.0);
+    ok = a0.is_lock_free();
+    a0 = a1.load();
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( a0.load(mo) == a0.load() );
+    a0.store(0.5);
+    a1.store(0.5, mo);
+    VERIFY( a0.load() == a1.load() );
+    auto f0 = a0.exchange(12.5);
+    auto f1 = a1.exchange(12.5, mo);
+    VERIFY( a0 == 12.5 );
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( f0 == 0.5 );
+    VERIFY( f0 == f1 );
+
+    expected = 12.5;
+    while (!a0.compare_exchange_weak(expected, 1.6, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == 1.6 );
+    VERIFY( expected == 12.5 );
+    expected = 1.5;
+    ok = a1.compare_exchange_weak(expected, 1.6, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5 && expected == 12.5 );
+    VERIFY( expected == 12.5 );
+    expected = 1.6;
+    ok = a0.compare_exchange_strong(expected, 3.2, mo, mo);
+    VERIFY( ok && a0.load() == 3.2 );
+    VERIFY( expected == 1.6 );
+    expected = 1.5;
+    ok = a1.compare_exchange_strong(expected, 3.2, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5 && expected == 12.5 );
+
+    expected = 3.2;
+    while (!a0.compare_exchange_weak(expected, .64))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == .64 );
+    expected = 12.5;
+    while (!a1.compare_exchange_weak(expected, 1.6, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a1.load() == 1.6 );
+    expected = 0.5;
+    ok = a0.compare_exchange_weak(expected, 3.2);
+    VERIFY( !ok && a0.load() == .64 && expected == .64 );
+    expected = 0.5;
+    ok = a1.compare_exchange_weak(expected, 3.2, mo);
+    VERIFY( !ok && a1.load() == 1.6 && expected == 1.6 );
+
+    expected = .64;
+    ok = a0.compare_exchange_strong(expected, 12.8);
+    VERIFY( ok && a0.load() == 12.8 );
+    expected = 1.6;
+    ok = a1.compare_exchange_strong(expected, 2.56, mo);
+    VERIFY( ok && a1.load() == 2.56 );
+    expected = 0.5;
+    ok = a0.compare_exchange_strong(expected, 3.2);
+    VERIFY( !ok && a0.load() == 12.8 && expected == 12.8 );
+    expected = 0.5;
+    ok = a1.compare_exchange_strong(expected, 3.2, mo);
+    VERIFY( !ok && a1.load() == 2.56 && expected == 2.56 );
+
+    f0 = a0.fetch_add(1.2);
+    VERIFY( f0 == 12.8 );
+    VERIFY( a0 == 14.0 );
+    f1 = a1.fetch_add(2.4, mo);
+    VERIFY( f1 == 2.56 );
+    VERIFY( a1 == 4.96 );
+
+    f0 = a0.fetch_sub(1.2);
+    VERIFY( f0 == 14.0 );
+    VERIFY( a0 == 12.8 );
+    f1 = a1.fetch_sub(3.5, mo);
+    VERIFY( f1 == 4.96 );
+    VERIFY( a1 == 1.46 );
+
+    f0 = a0 += 1.2;
+    VERIFY( f0 == 14.0 );
+    VERIFY( a0 == 14.0 );
+
+    f0 = a0 -= 0.8;
+    VERIFY( f0 == 13.2 );
+    VERIFY( a0 == 13.2 );
+  }
+
+  // Repeat for volatile std::atomic<double>
+  if constexpr (std::atomic<double>::is_always_lock_free)
+  {
+    volatile std::atomic<double> a0;
+    volatile std::atomic<double> a1(1.0);
+    ok = a0.is_lock_free();
+    a0 = a1.load();
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( a0.load(mo) == a0.load() );
+    a0.store(0.5);
+    a1.store(0.5, mo);
+    VERIFY( a0.load() == a1.load() );
+    auto f0 = a0.exchange(12.5);
+    auto f1 = a1.exchange(12.5, mo);
+    VERIFY( a0 == 12.5 );
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( f0 == 0.5 );
+    VERIFY( f0 == f1 );
+
+    expected = 12.5;
+    while (!a0.compare_exchange_weak(expected, 1.6, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == 1.6 );
+    VERIFY( expected == 12.5 );
+    expected = 1.5;
+    ok = a1.compare_exchange_weak(expected, 1.6, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5 && expected == 12.5 );
+    VERIFY( expected == 12.5 );
+    expected = 1.6;
+    ok = a0.compare_exchange_strong(expected, 3.2, mo, mo);
+    VERIFY( ok && a0.load() == 3.2 );
+    VERIFY( expected == 1.6 );
+    expected = 1.5;
+    ok = a1.compare_exchange_strong(expected, 3.2, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5 && expected == 12.5 );
+
+    expected = 3.2;
+    while (!a0.compare_exchange_weak(expected, .64))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == .64 );
+    expected = 12.5;
+    while (!a1.compare_exchange_weak(expected, 1.6, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a1.load() == 1.6 );
+    expected = 0.5;
+    ok = a0.compare_exchange_weak(expected, 3.2);
+    VERIFY( !ok && a0.load() == .64 && expected == .64 );
+    expected = 0.5;
+    ok = a1.compare_exchange_weak(expected, 3.2, mo);
+    VERIFY( !ok && a1.load() == 1.6 && expected == 1.6 );
+
+    expected = .64;
+    ok = a0.compare_exchange_strong(expected, 12.8);
+    VERIFY( ok && a0.load() == 12.8 );
+    expected = 1.6;
+    ok = a1.compare_exchange_strong(expected, 2.56, mo);
+    VERIFY( ok && a1.load() == 2.56 );
+    expected = 0.5;
+    ok = a0.compare_exchange_strong(expected, 3.2);
+    VERIFY( !ok && a0.load() == 12.8 && expected == 12.8 );
+    expected = 0.5;
+    ok = a1.compare_exchange_strong(expected, 3.2, mo);
+    VERIFY( !ok && a1.load() == 2.56 && expected == 2.56 );
+
+    f0 = a0.fetch_add(1.2);
+    VERIFY( f0 == 12.8 );
+    VERIFY( a0 == 14.0 );
+    f1 = a1.fetch_add(2.4, mo);
+    VERIFY( f1 == 2.56 );
+    VERIFY( a1 == 4.96 );
+
+    f0 = a0.fetch_sub(1.2);
+    VERIFY( f0 == 14.0 );
+    VERIFY( a0 == 12.8 );
+    f1 = a1.fetch_sub(3.5, mo);
+    VERIFY( f1 == 4.96 );
+    VERIFY( a1 == 1.46 );
+
+    f0 = a0 += 1.2;
+    VERIFY( f0 == 14.0 );
+    VERIFY( a0 == 14.0 );
+
+    f0 = a0 -= 0.8;
+    VERIFY( f0 == 13.2 );
+    VERIFY( a0 == 13.2 );
+  }
+}
+
+void
+test03()
+{
+  const auto mo = std::memory_order_relaxed;
+  bool ok;
+  long double expected;
+
+  if constexpr (std::atomic<long double>::is_always_lock_free)
+  {
+    std::atomic<long double> a0;
+    std::atomic<long double> a1(1.0l);
+    ok = a0.is_lock_free();
+    a0 = a1.load();
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( a0.load(mo) == a0.load() );
+    a0.store(0.5l);
+    a1.store(0.5l, mo);
+    VERIFY( a0.load() == a1.load() );
+    auto f0 = a0.exchange(12.5l);
+    auto f1 = a1.exchange(12.5l, mo);
+    VERIFY( a0 == 12.5l );
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( f0 == 0.5l );
+    VERIFY( f0 == f1 );
+
+    expected = 12.5l;
+    while (!a0.compare_exchange_weak(expected, 1.6l, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == 1.6l );
+    VERIFY( expected == 12.5l );
+    expected = 1.5l;
+    ok = a1.compare_exchange_weak(expected, 1.6l, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5l && expected == 12.5l );
+    VERIFY( expected == 12.5l );
+    expected = 1.6l;
+    ok = a0.compare_exchange_strong(expected, 3.2l, mo, mo);
+    VERIFY( ok && a0.load() == 3.2l );
+    VERIFY( expected == 1.6l );
+    expected = 1.5l;
+    ok = a1.compare_exchange_strong(expected, 3.2l, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5l && expected == 12.5l );
+
+    expected = 3.2l;
+    while (!a0.compare_exchange_weak(expected, .64l))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == .64l );
+    expected = 12.5l;
+    while (!a1.compare_exchange_weak(expected, 1.6l, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a1.load() == 1.6l );
+    expected = 0.5l;
+    ok = a0.compare_exchange_weak(expected, 3.2l);
+    VERIFY( !ok && a0.load() == .64l && expected == .64l );
+    expected = 0.5l;
+    ok = a1.compare_exchange_weak(expected, 3.2l, mo);
+    VERIFY( !ok && a1.load() == 1.6l && expected == 1.6l );
+
+    expected = .64l;
+    ok = a0.compare_exchange_strong(expected, 12.8l);
+    VERIFY( ok && a0.load() == 12.8l );
+    expected = 1.6l;
+    ok = a1.compare_exchange_strong(expected, 2.56l, mo);
+    VERIFY( ok && a1.load() == 2.56l );
+    expected = 0.5l;
+    ok = a0.compare_exchange_strong(expected, 3.2l);
+    VERIFY( !ok && a0.load() == 12.8l && expected == 12.8l );
+    expected = 0.5l;
+    ok = a1.compare_exchange_strong(expected, 3.2l, mo);
+    VERIFY( !ok && a1.load() == 2.56l && expected == 2.56l );
+
+    f0 = a0.fetch_add(1.2l);
+    VERIFY( f0 == 12.8l );
+    VERIFY( a0 == 14.0l );
+    f1 = a1.fetch_add(2.4l, mo);
+    VERIFY( f1 == 2.56l );
+    VERIFY( a1 == 4.96l );
+
+    f0 = a0.fetch_sub(1.2l);
+    VERIFY( f0 == 14.0l );
+    VERIFY( a0 == 12.8l );
+    f1 = a1.fetch_sub(3.5l, mo);
+    VERIFY( f1 == 4.96l );
+    VERIFY( a1 == 1.46l );
+
+    f0 = a0 += 1.2l;
+    VERIFY( f0 == 14.0l );
+    VERIFY( a0 == 14.0l );
+
+    f0 = a0 -= 0.8l;
+    VERIFY( f0 == 13.2l );
+    VERIFY( a0 == 13.2l );
+  }
+
+  // Repeat for volatile std::atomic<long double>
+  if constexpr (std::atomic<long double>::is_always_lock_free)
+  {
+    volatile std::atomic<long double> a0;
+    volatile std::atomic<long double> a1(1.0l);
+    ok = a0.is_lock_free();
+    a0 = a1.load();
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( a0.load(mo) == a0.load() );
+    a0.store(0.5l);
+    a1.store(0.5l, mo);
+    VERIFY( a0.load() == a1.load() );
+    auto f0 = a0.exchange(12.5l);
+    auto f1 = a1.exchange(12.5l, mo);
+    VERIFY( a0 == 12.5l );
+    VERIFY( a0.load() == a1.load() );
+    VERIFY( f0 == 0.5l );
+    VERIFY( f0 == f1 );
+
+    expected = 12.5l;
+    while (!a0.compare_exchange_weak(expected, 1.6l, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == 1.6l );
+    VERIFY( expected == 12.5l );
+    expected = 1.5l;
+    ok = a1.compare_exchange_weak(expected, 1.6l, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5l && expected == 12.5l );
+    VERIFY( expected == 12.5l );
+    expected = 1.6l;
+    ok = a0.compare_exchange_strong(expected, 3.2l, mo, mo);
+    VERIFY( ok && a0.load() == 3.2l );
+    VERIFY( expected == 1.6l );
+    expected = 1.5l;
+    ok = a1.compare_exchange_strong(expected, 3.2l, mo, mo);
+    VERIFY( !ok && a1.load() == 12.5l && expected == 12.5l );
+
+    expected = 3.2l;
+    while (!a0.compare_exchange_weak(expected, .64l))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a0.load() == .64l );
+    expected = 12.5l;
+    while (!a1.compare_exchange_weak(expected, 1.6l, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a1.load() == 1.6l );
+    expected = 0.5l;
+    ok = a0.compare_exchange_weak(expected, 3.2l);
+    VERIFY( !ok && a0.load() == .64l && expected == .64l );
+    expected = 0.5l;
+    ok = a1.compare_exchange_weak(expected, 3.2l, mo);
+    VERIFY( !ok && a1.load() == 1.6l && expected == 1.6l );
+
+    expected = .64l;
+    ok = a0.compare_exchange_strong(expected, 12.8l);
+    VERIFY( ok && a0.load() == 12.8l );
+    expected = 1.6l;
+    ok = a1.compare_exchange_strong(expected, 2.56l, mo);
+    VERIFY( ok && a1.load() == 2.56l );
+    expected = 0.5l;
+    ok = a0.compare_exchange_strong(expected, 3.2l);
+    VERIFY( !ok && a0.load() == 12.8l && expected == 12.8l );
+    expected = 0.5l;
+    ok = a1.compare_exchange_strong(expected, 3.2l, mo);
+    VERIFY( !ok && a1.load() == 2.56l && expected == 2.56l );
+
+    f0 = a0.fetch_add(1.2l);
+    VERIFY( f0 == 12.8l );
+    VERIFY( a0 == 14.0l );
+    f1 = a1.fetch_add(2.4l, mo);
+    VERIFY( f1 == 2.56l );
+    VERIFY( a1 == 4.96l );
+
+    f0 = a0.fetch_sub(1.2l);
+    VERIFY( f0 == 14.0l );
+    VERIFY( a0 == 12.8l );
+    f1 = a1.fetch_sub(3.5l, mo);
+    VERIFY( f1 == 4.96l );
+    VERIFY( a1 == 1.46l );
+
+    f0 = a0 += 1.2l;
+    VERIFY( f0 == 14.0l );
+    VERIFY( a0 == 14.0l );
+
+    f0 = a0 -= 0.8l;
+    VERIFY( f0 == 13.2l );
+    VERIFY( a0 == 13.2l );
+  }
+}
+
+int
+main()
+{
+  test01();
+  test02();
+  test03();
+}
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_float/requirements.cc b/libstdc++-v3/testsuite/29_atomics/atomic_float/requirements.cc
new file mode 100644
index 00000000000..e52608c451d
--- /dev/null
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_float/requirements.cc
@@ -0,0 +1,69 @@
+// Copyright (C) 2019 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++2a" }
+// { dg-do compile { target c++2a } }
+
+#include <atomic>
+
+void
+test01()
+{
+  using A = std::atomic<float>;
+  static_assert( std::is_standard_layout_v<A> );
+  static_assert( std::is_trivially_default_constructible_v<A> );
+  static_assert( std::is_trivially_destructible_v<A> );
+  static_assert( std::is_same_v<A::value_type, float> );
+  static_assert( std::is_same_v<A::difference_type, A::value_type> );
+  static_assert( !std::is_copy_constructible_v<A> );
+  static_assert( !std::is_move_constructible_v<A> );
+  static_assert( !std::is_copy_assignable_v<A> );
+  static_assert( !std::is_move_assignable_v<A> );
+  static_assert( !std::is_assignable_v<volatile A&, const A&> );
+}
+
+void
+test02()
+{
+  using A = std::atomic<double>;
+  static_assert( std::is_standard_layout_v<A> );
+  static_assert( std::is_trivially_default_constructible_v<A> );
+  static_assert( std::is_trivially_destructible_v<A> );
+  static_assert( std::is_same_v<A::value_type, double> );
+  static_assert( std::is_same_v<A::difference_type, A::value_type> );
+  static_assert( !std::is_copy_constructible_v<A> );
+  static_assert( !std::is_move_constructible_v<A> );
+  static_assert( !std::is_copy_assignable_v<A> );
+  static_assert( !std::is_move_assignable_v<A> );
+  static_assert( !std::is_assignable_v<volatile A&, const A&> );
+}
+
+void
+test03()
+{
+  using A = std::atomic<long double>;
+  static_assert( std::is_standard_layout_v<A> );
+  static_assert( std::is_trivially_default_constructible_v<A> );
+  static_assert( std::is_trivially_destructible_v<A> );
+  static_assert( std::is_same_v<A::value_type, long double> );
+  static_assert( std::is_same_v<A::difference_type, A::value_type> );
+  static_assert( !std::is_copy_constructible_v<A> );
+  static_assert( !std::is_move_constructible_v<A> );
+  static_assert( !std::is_copy_assignable_v<A> );
+  static_assert( !std::is_move_assignable_v<A> );
+  static_assert( !std::is_assignable_v<volatile A&, const A&> );
+}
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc
new file mode 100644
index 00000000000..231901cc422
--- /dev/null
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc
@@ -0,0 +1,41 @@
+// Copyright (C) 2019 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++2a" }
+// { dg-do compile { target c++2a } }
+
+#include <atomic>
+
+void
+test01()
+{
+  int i = 0;
+  std::atomic_ref a0(i);
+  static_assert(std::is_same_v<decltype(a0), std::atomic_ref<int>>);
+
+  float f = 1.0f;
+  std::atomic_ref a1(f);
+  static_assert(std::is_same_v<decltype(a1), std::atomic_ref<float>>);
+
+  int* p = &i;
+  std::atomic_ref a2(p);
+  static_assert(std::is_same_v<decltype(a2), std::atomic_ref<int*>>);
+
+  struct X { } x;
+  std::atomic_ref a3(x);
+  static_assert(std::is_same_v<decltype(a3), std::atomic_ref<X>>);
+}
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc
new file mode 100644
index 00000000000..0633f28e254
--- /dev/null
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc
@@ -0,0 +1,320 @@
+// Copyright (C) 2019 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++2a" }
+// { dg-do run { target c++2a } }
+
+#include <atomic>
+#include <testsuite_hooks.h>
+
+void
+test01()
+{
+  float value;
+  if constexpr (std::atomic_ref<float>::is_always_lock_free)
+  {
+    const auto mo = std::memory_order_relaxed;
+    std::atomic_ref<float> a(value);
+    bool ok = a.is_lock_free();
+    if constexpr (std::atomic_ref<float>::is_always_lock_free)
+      VERIFY( ok );
+    a = 1.6f;
+    VERIFY( a.load() == 1.6f );
+    a.store(0.8f);
+    VERIFY( a.load(mo) == 0.8f );
+    a.store(3.2f, mo);
+    VERIFY( a.load() == 3.2f );
+    auto v = a.exchange(6.4f);
+    VERIFY( a == 6.4f );
+    VERIFY( v == 3.2f );
+    v = a.exchange(1.28f, mo);
+    VERIFY( a == 1.28f );
+    VERIFY( v == 6.4f );
+
+    auto expected = a.load();
+    while (!a.compare_exchange_weak(expected, 25.6f, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 25.6f );
+    VERIFY( expected == 1.28f );
+    expected = 3.2f;
+    ok = a.compare_exchange_weak(expected, 51.2f, mo, mo);
+    VERIFY( !ok && a.load() == 25.6f && expected == 25.6f );
+    ok = a.compare_exchange_strong(expected, 51.2f, mo, mo);
+    VERIFY( ok && a.load() == 51.2f && expected == 25.6f );
+    expected = 0.0f;
+    ok = a.compare_exchange_strong(expected, 1.28f, mo, mo);
+    VERIFY( !ok && a.load() == 51.2f && expected == 51.2f );
+
+    while (!a.compare_exchange_weak(expected, 25.6f))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 25.6f  && expected == 51.2f );
+    expected = a.load();
+    while (!a.compare_exchange_weak(expected, 10.24f, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 10.24f && expected == 25.6f );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, 40.96f);
+    VERIFY( !ok && a.load() == 10.24f && expected == 10.24f );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, 40.96f, mo);
+    VERIFY( !ok && a.load() == 10.24f && expected == 10.24f );
+
+    ok = a.compare_exchange_strong(expected, 1.024f);
+    VERIFY( ok && a.load() == 1.024f && expected == 10.24f );
+    expected = a.load();
+    ok = a.compare_exchange_strong(expected, 204.8f, mo);
+    VERIFY( ok && a.load() == 204.8f && expected == 1.024f );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 6.4f);
+    VERIFY( !ok && a.load() == 204.8f && expected == 204.8f );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 6.4f, mo);
+    VERIFY( !ok && a.load() == 204.8f && expected == 204.8f );
+
+    v = a.fetch_add(3.2f);
+    VERIFY( v == 204.8f );
+    VERIFY( a == 208.0f );
+    v = a.fetch_add(-8.5f, mo);
+    VERIFY( v == 208.0f );
+    VERIFY( a == 199.5f );
+
+    v = a.fetch_sub(109.5f);
+    VERIFY( v == 199.5f );
+    VERIFY( a == 90.0f );
+    v = a.fetch_sub(2, mo);
+    VERIFY( v == 90.0f );
+    VERIFY( a == 88.0f );
+
+    v = a += 5.0f;
+    VERIFY( v == 93.0f );
+    VERIFY( a == 93.0f );
+
+    v = a -= 6.5f;
+    VERIFY( v == 86.5f );
+    VERIFY( a == 86.5f );
+  }
+
+  if constexpr (std::atomic_ref<float>::is_always_lock_free)
+    VERIFY( value == 86.5f );
+}
+
+void
+test02()
+{
+  double value;
+  if constexpr (std::atomic_ref<double>::is_always_lock_free)
+  {
+    const auto mo = std::memory_order_relaxed;
+    std::atomic_ref<double> a(value);
+    bool ok = a.is_lock_free();
+    if constexpr (std::atomic_ref<double>::is_always_lock_free)
+      VERIFY( ok );
+    a = 1.6;
+    VERIFY( a.load() == 1.6 );
+    a.store(0.8);
+    VERIFY( a.load(mo) == 0.8 );
+    a.store(3.2, mo);
+    VERIFY( a.load() == 3.2 );
+    auto v = a.exchange(6.4);
+    VERIFY( a == 6.4 );
+    VERIFY( v == 3.2 );
+    v = a.exchange(1.28, mo);
+    VERIFY( a == 1.28 );
+    VERIFY( v == 6.4 );
+
+    auto expected = a.load();
+    while (!a.compare_exchange_weak(expected, 25.6, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 25.6 );
+    VERIFY( expected == 1.28 );
+    expected = 3.2;
+    ok = a.compare_exchange_weak(expected, 51.2, mo, mo);
+    VERIFY( !ok && a.load() == 25.6 && expected == 25.6 );
+    ok = a.compare_exchange_strong(expected, 51.2, mo, mo);
+    VERIFY( ok && a.load() == 51.2 && expected == 25.6 );
+    expected = 0.0;
+    ok = a.compare_exchange_strong(expected, 1.28, mo, mo);
+    VERIFY( !ok && a.load() == 51.2 && expected == 51.2 );
+
+    while (!a.compare_exchange_weak(expected, 25.6))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 25.6  && expected == 51.2 );
+    expected = a.load();
+    while (!a.compare_exchange_weak(expected, 10.24, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 10.24 && expected == 25.6 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, 40.96);
+    VERIFY( !ok && a.load() == 10.24 && expected == 10.24 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, 40.96, mo);
+    VERIFY( !ok && a.load() == 10.24 && expected == 10.24 );
+
+    ok = a.compare_exchange_strong(expected, 1.024);
+    VERIFY( ok && a.load() == 1.024 && expected == 10.24 );
+    expected = a.load();
+    ok = a.compare_exchange_strong(expected, 204.8, mo);
+    VERIFY( ok && a.load() == 204.8 && expected == 1.024 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 6.4);
+    VERIFY( !ok && a.load() == 204.8 && expected == 204.8 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 6.4, mo);
+    VERIFY( !ok && a.load() == 204.8 && expected == 204.8 );
+
+    v = a.fetch_add(3.2);
+    VERIFY( v == 204.8 );
+    VERIFY( a == 208.0 );
+    v = a.fetch_add(-8.5, mo);
+    VERIFY( v == 208.0 );
+    VERIFY( a == 199.5 );
+
+    v = a.fetch_sub(109.5);
+    VERIFY( v == 199.5 );
+    VERIFY( a == 90.0 );
+    v = a.fetch_sub(2, mo);
+    VERIFY( v == 90.0 );
+    VERIFY( a == 88.0 );
+
+    v = a += 5.0;
+    VERIFY( v == 93.0 );
+    VERIFY( a == 93.0 );
+
+    v = a -= 6.5;
+    VERIFY( v == 86.5 );
+    VERIFY( a == 86.5 );
+  }
+
+  if constexpr (std::atomic_ref<double>::is_always_lock_free)
+    VERIFY( value == 86.5 );
+}
+
+void
+test03()
+{
+  long double value;
+  if constexpr (std::atomic_ref<long double>::is_always_lock_free)
+  {
+    const auto mo = std::memory_order_relaxed;
+    std::atomic_ref<long double> a(value);
+    bool ok = a.is_lock_free();
+    if constexpr (std::atomic_ref<long double>::is_always_lock_free)
+      VERIFY( ok );
+    a = 1.6l;
+    VERIFY( a.load() == 1.6l );
+    a.store(0.8l);
+    VERIFY( a.load(mo) == 0.8l );
+    a.store(3.2l, mo);
+    VERIFY( a.load() == 3.2l );
+    auto v = a.exchange(6.4l);
+    VERIFY( a == 6.4l );
+    VERIFY( v == 3.2l );
+    v = a.exchange(1.28l, mo);
+    VERIFY( a == 1.28l );
+    VERIFY( v == 6.4l );
+
+    auto expected = a.load();
+    while (!a.compare_exchange_weak(expected, 25.6l, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 25.6l );
+    VERIFY( expected == 1.28l );
+    expected = 3.2l;
+    ok = a.compare_exchange_weak(expected, 51.2l, mo, mo);
+    VERIFY( !ok && a.load() == 25.6l && expected == 25.6l );
+    ok = a.compare_exchange_strong(expected, 51.2l, mo, mo);
+    VERIFY( ok && a.load() == 51.2l && expected == 25.6l );
+    expected = 0.0l;
+    ok = a.compare_exchange_strong(expected, 1.28l, mo, mo);
+    VERIFY( !ok && a.load() == 51.2l && expected == 51.2l );
+
+    while (!a.compare_exchange_weak(expected, 25.6l))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 25.6l  && expected == 51.2l );
+    expected = a.load();
+    while (!a.compare_exchange_weak(expected, 10.24l, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 10.24l && expected == 25.6l );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, 40.96l);
+    VERIFY( !ok && a.load() == 10.24l && expected == 10.24l );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, 40.96l, mo);
+    VERIFY( !ok && a.load() == 10.24l && expected == 10.24l );
+
+    ok = a.compare_exchange_strong(expected, 1.024l);
+    VERIFY( ok && a.load() == 1.024l && expected == 10.24l );
+    expected = a.load();
+    ok = a.compare_exchange_strong(expected, 204.8l, mo);
+    VERIFY( ok && a.load() == 204.8l && expected == 1.024l );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 6.4l);
+    VERIFY( !ok && a.load() == 204.8l && expected == 204.8l );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 6.4l, mo);
+    VERIFY( !ok && a.load() == 204.8l && expected == 204.8l );
+
+    v = a.fetch_add(3.2l);
+    VERIFY( v == 204.8l );
+    VERIFY( a == 208.0l );
+    v = a.fetch_add(-8.5l, mo);
+    VERIFY( v == 208.0l );
+    VERIFY( a == 199.5l );
+
+    v = a.fetch_sub(109.5l);
+    VERIFY( v == 199.5l );
+    VERIFY( a == 90.0l );
+    v = a.fetch_sub(2, mo);
+    VERIFY( v == 90.0l );
+    VERIFY( a == 88.0l );
+
+    v = a += 5.0l;
+    VERIFY( v == 93.0l );
+    VERIFY( a == 93.0l );
+
+    v = a -= 6.5l;
+    VERIFY( v == 86.5l );
+    VERIFY( a == 86.5l );
+  }
+
+  if constexpr (std::atomic_ref<long double>::is_always_lock_free)
+    VERIFY( value == 86.5l );
+}
+
+void
+test04()
+{
+  if constexpr (std::atomic_ref<float>::is_always_lock_free)
+  {
+    float i = 0;
+    float* ptr = 0;
+    std::atomic_ref<float*> a0(ptr);
+    std::atomic_ref<float*> a1(ptr);
+    std::atomic_ref<float*> a2(a0);
+    a0 = &i;
+    VERIFY( a1 == &i );
+    VERIFY( a2 == &i );
+  }
+}
+
+int
+main()
+{
+  test01();
+  test02();
+  test03();
+  test04();
+}
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc
new file mode 100644
index 00000000000..61ae61bb3de
--- /dev/null
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc
@@ -0,0 +1,122 @@
+// Copyright (C) 2019 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++2a" }
+// { dg-do run { target c++2a } }
+// { dg-add-options libatomic }
+
+#include <atomic>
+#include <limits.h>
+#include <testsuite_hooks.h>
+
+struct X
+{
+  X() = default;
+  X(int i) : i(i) { }
+  bool operator==(int rhs) const { return i == rhs; }
+  int i;
+};
+
+void
+test01()
+{
+  X value;
+
+  {
+    const auto mo = std::memory_order_relaxed;
+    std::atomic_ref<X> a(value);
+    bool ok = a.is_lock_free();
+    if constexpr (std::atomic_ref<X>::is_always_lock_free)
+      VERIFY( ok );
+    a = X{};
+    VERIFY( a.load() == 0 );
+    VERIFY( a.load(mo) == 0 );
+    a.store(1);
+    VERIFY( a.load() == 1 );
+    auto v = a.exchange(2);
+    VERIFY( a.load() == 2 );
+    VERIFY( v == 1 );
+    v = a.exchange(3, mo);
+    VERIFY( a.load() == 3 );
+    VERIFY( v == 2 );
+
+    auto expected = a.load();
+    while (!a.compare_exchange_weak(expected, 4, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 4 );
+    VERIFY( expected == 3 );
+    expected = 1;
+    ok = a.compare_exchange_weak(expected, 5, mo, mo);
+    VERIFY( !ok && a.load() == 4 && expected == 4 );
+    ok = a.compare_exchange_strong(expected, 5, mo, mo);
+    VERIFY( ok && a.load() == 5 && expected == 4 );
+    expected = 0;
+    ok = a.compare_exchange_strong(expected, 3, mo, mo);
+    VERIFY( !ok && a.load() == 5 && expected == 5 );
+
+    while (!a.compare_exchange_weak(expected, 4))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 4  && expected == 5 );
+    expected = a.load();
+    while (!a.compare_exchange_weak(expected, 6, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 6 && expected == 4 );
+    expected = a.load();
+    expected.i += 1;
+    ok = a.compare_exchange_weak(expected, -8);
+    VERIFY( !ok && a.load() == 6 && expected == 6 );
+    expected = a.load();
+    expected.i += 1;
+    ok = a.compare_exchange_weak(expected, 8, mo);
+    VERIFY( !ok && a.load() == 6 && expected == 6 );
+
+    ok = a.compare_exchange_strong(expected, -6);
+    VERIFY( ok && a.load() == -6 && expected == 6 );
+    expected = a.load();
+    ok = a.compare_exchange_strong(expected, 7, mo);
+    VERIFY( ok && a.load() == 7 && expected == -6 );
+    expected = a.load();
+    expected.i += 1;
+    ok = a.compare_exchange_strong(expected, 2);
+    VERIFY( !ok && a.load() == 7 && expected == 7 );
+    expected = a.load();
+    expected.i += 1;
+    ok = a.compare_exchange_strong(expected, 2, mo);
+    VERIFY( !ok && a.load() == 7 && expected == 7 );
+  }
+
+  VERIFY( value == 7 );
+}
+
+void
+test02()
+{
+  X i;
+  std::atomic_ref<X> a0(i);
+  std::atomic_ref<X> a1(i);
+  std::atomic_ref<X> a2(a0);
+  a0 = 42;
+  VERIFY( a1.load() == 42 );
+  VERIFY( a2.load() == 42 );
+}
+
+int
+main()
+{
+  test01();
+  test02();
+}
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc
new file mode 100644
index 00000000000..4b5b4d11fd8
--- /dev/null
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc
@@ -0,0 +1,332 @@
+// Copyright (C) 2019 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++2a" }
+// { dg-do run { target c++2a } }
+// { dg-add-options libatomic }
+
+#include <atomic>
+#include <limits.h>
+#include <testsuite_hooks.h>
+
+void
+test01()
+{
+  int value;
+
+  {
+    const auto mo = std::memory_order_relaxed;
+    std::atomic_ref<int> a(value);
+    bool ok = a.is_lock_free();
+    if constexpr (std::atomic_ref<int>::is_always_lock_free)
+      VERIFY( ok );
+    a = 0;
+    VERIFY( a.load() == 0 );
+    VERIFY( a.load(mo) == 0 );
+    a.store(1);
+    VERIFY( a.load() == 1 );
+    auto v = a.exchange(2);
+    VERIFY( a == 2 );
+    VERIFY( v == 1 );
+    v = a.exchange(3, mo);
+    VERIFY( a == 3 );
+    VERIFY( v == 2 );
+
+    auto expected = a.load();
+    while (!a.compare_exchange_weak(expected, 4, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 4 );
+    VERIFY( expected == 3 );
+    expected = 1;
+    ok = a.compare_exchange_weak(expected, 5, mo, mo);
+    VERIFY( !ok && a.load() == 4 && expected == 4 );
+    ok = a.compare_exchange_strong(expected, 5, mo, mo);
+    VERIFY( ok && a.load() == 5 && expected == 4 );
+    expected = 0;
+    ok = a.compare_exchange_strong(expected, 3, mo, mo);
+    VERIFY( !ok && a.load() == 5 && expected == 5 );
+
+    while (!a.compare_exchange_weak(expected, 4))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 4  && expected == 5 );
+    expected = a.load();
+    while (!a.compare_exchange_weak(expected, 6, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 6 && expected == 4 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, -8);
+    VERIFY( !ok && a.load() == 6 && expected == 6 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, 8, mo);
+    VERIFY( !ok && a.load() == 6 && expected == 6 );
+
+    ok = a.compare_exchange_strong(expected, -6);
+    VERIFY( ok && a.load() == -6 && expected == 6 );
+    expected = a.load();
+    ok = a.compare_exchange_strong(expected, 7, mo);
+    VERIFY( ok && a.load() == 7 && expected == -6 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 2);
+    VERIFY( !ok && a.load() == 7 && expected == 7 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 2, mo);
+    VERIFY( !ok && a.load() == 7 && expected == 7 );
+
+    v = a.fetch_add(2);
+    VERIFY( v == 7 );
+    VERIFY( a == 9 );
+    v = a.fetch_add(-30, mo);
+    VERIFY( v == 9 );
+    VERIFY( a == -21 );
+
+    v = a.fetch_sub(3);
+    VERIFY( v == -21 );
+    VERIFY( a == -24 );
+    v = a.fetch_sub(-41, mo);
+    VERIFY( v == -24 );
+    VERIFY( a == 17 );
+
+    v = a.fetch_and(0x101);
+    VERIFY( v == 17 );
+    VERIFY( a == 1 );
+    a = 0x17;
+    v = a.fetch_and(0x23, mo);
+    VERIFY( v == 0x17 );
+    VERIFY( a == 3 );
+
+    v = a.fetch_or(0x101);
+    VERIFY( v == 3 );
+    VERIFY( a == 0x103 );
+    v = a.fetch_or(0x23, mo);
+    VERIFY( v == 0x103 );
+    VERIFY( a == 0x123 );
+
+    v = a.fetch_xor(0x101);
+    VERIFY( v == 0x123 );
+    VERIFY( a == 0x022 );
+    v = a.fetch_xor(0x123, mo);
+    VERIFY( v == 0x022 );
+    VERIFY( a == 0x101 );
+
+    v = a++;
+    VERIFY( v == 0x101 );
+    VERIFY( a == 0x102 );
+    v = a--;
+    VERIFY( v == 0x102 );
+    VERIFY( a == 0x101 );
+    v = ++a;
+    VERIFY( v == 0x102 );
+    VERIFY( a == 0x102 );
+    v = --a;
+    VERIFY( v == 0x101 );
+    VERIFY( a == 0x101 );
+
+    v = a += -10;
+    VERIFY( v == 247 );
+    VERIFY( a == 247 );
+
+    v = a -= 250;
+    VERIFY( v == -3 );
+    VERIFY( a == -3 );
+
+    a = 0x17;
+    v = a &= 0x102;
+    VERIFY( v == 2 );
+    VERIFY( a == 2 );
+
+    v = a |= 0x101;
+    VERIFY( v == 0x103 );
+    VERIFY( a == 0x103 );
+
+    v = a ^= 0x121;
+    VERIFY( v == 0x022 );
+    VERIFY( a == 0x022 );
+  }
+
+  VERIFY( value == 0x022 );
+}
+
+void
+test02()
+{
+  unsigned short value;
+
+  {
+    const auto mo = std::memory_order_relaxed;
+    std::atomic_ref<unsigned short> a(value);
+    bool ok = a.is_lock_free();
+    if constexpr (std::atomic_ref<unsigned short>::is_always_lock_free)
+      VERIFY( ok );
+    a = 0;
+    VERIFY( a.load() == 0 );
+    VERIFY( a.load(mo) == 0 );
+    a.store(1);
+    VERIFY( a.load() == 1 );
+    auto v = a.exchange(2);
+    VERIFY( a == 2 );
+    VERIFY( v == 1 );
+    v = a.exchange(3, mo);
+    VERIFY( a == 3 );
+    VERIFY( v == 2 );
+
+    auto expected = a.load();
+    while (!a.compare_exchange_weak(expected, 4, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 4 );
+    VERIFY( expected == 3 );
+    expected = 1;
+    ok = a.compare_exchange_weak(expected, 5, mo, mo);
+    VERIFY( !ok && a.load() == 4 && expected == 4 );
+    ok = a.compare_exchange_strong(expected, 5, mo, mo);
+    VERIFY( ok && a.load() == 5 && expected == 4 );
+    expected = 0;
+    ok = a.compare_exchange_strong(expected, 3, mo, mo);
+    VERIFY( !ok && a.load() == 5 && expected == 5 );
+
+    while (!a.compare_exchange_weak(expected, 4))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 4  && expected == 5 );
+    expected = a.load();
+    while (!a.compare_exchange_weak(expected, 6, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == 6 && expected == 4 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, -8);
+    VERIFY( !ok && a.load() == 6 && expected == 6 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, 8, mo);
+    VERIFY( !ok && a.load() == 6 && expected == 6 );
+
+    ok = a.compare_exchange_strong(expected, -6);
+    VERIFY( ok && a.load() == (unsigned short)-6 && expected == 6 );
+    expected = a.load();
+    ok = a.compare_exchange_strong(expected, 7, mo);
+    VERIFY( ok && a.load() == 7 && expected == (unsigned short)-6 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 2);
+    VERIFY( !ok && a.load() == 7 && expected == 7 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, 2, mo);
+    VERIFY( !ok && a.load() == 7 && expected == 7 );
+
+    v = a.fetch_add(2);
+    VERIFY( v == 7 );
+    VERIFY( a == 9 );
+    v = a.fetch_add(-30, mo);
+    VERIFY( v == 9 );
+    VERIFY( a == (unsigned short)-21 );
+
+    v = a.fetch_sub(3);
+    VERIFY( v == (unsigned short)-21 );
+    VERIFY( a == (unsigned short)-24 );
+    v = a.fetch_sub((unsigned short)-41, mo);
+    VERIFY( v == (unsigned short)-24 );
+    VERIFY( a == 17 );
+
+    v = a.fetch_and(0x21);
+    VERIFY( v == 17 );
+    VERIFY( a == 1 );
+    a = 0x17;
+    v = a.fetch_and(0x23, mo);
+    VERIFY( v == 0x17 );
+    VERIFY( a == 3 );
+
+    v = a.fetch_or(0x21);
+    VERIFY( v == 3 );
+    VERIFY( a == 0x23 );
+    v = a.fetch_or(0x44, mo);
+    VERIFY( v == 0x23 );
+    VERIFY( a == 0x67 );
+
+    v = a.fetch_xor(0x21);
+    VERIFY( v == 0x67 );
+    VERIFY( a == 0x46 );
+    v = a.fetch_xor(0x12, mo);
+    VERIFY( v == 0x46 );
+    VERIFY( a == 0x54 );
+
+    v = a++;
+    VERIFY( v == 0x54 );
+    VERIFY( a == 0x55 );
+    v = a--;
+    VERIFY( v == 0x55 );
+    VERIFY( a == 0x54 );
+    v = ++a;
+    VERIFY( v == 0x55 );
+    VERIFY( a == 0x55 );
+    v = --a;
+    VERIFY( v == 0x54 );
+    VERIFY( a == 0x54 );
+
+    v = a += -10;
+    VERIFY( v == 0x4a );
+    VERIFY( a == 0x4a );
+
+    v = a -= 15;
+    VERIFY( v == 0x3b );
+    VERIFY( a == 0x3b );
+
+    a = 0x17;
+    v = a &= 0x12;
+    VERIFY( v == 0x12 );
+    VERIFY( a == 0x12 );
+
+    v = a |= 0x34;
+    VERIFY( v == 0x36 );
+    VERIFY( a == 0x36 );
+
+    v = a ^= 0x12;
+    VERIFY( v == 0x24 );
+    VERIFY( a == 0x24 );
+  }
+
+  VERIFY( value == 0x24 );
+}
+
+void
+test03()
+{
+  int i = 0;
+  std::atomic_ref<int> a0(i);
+  std::atomic_ref<int> a1(i);
+  std::atomic_ref<int> a2(a0);
+  a0 = 42;
+  VERIFY( a1 == 42 );
+  VERIFY( a2 == 42 );
+}
+
+void
+test04()
+{
+  int i = INT_MIN;
+  std::atomic_ref<int> a(i);
+  --a;
+  VERIFY( a == INT_MAX );
+  ++a;
+  VERIFY( a == INT_MIN );
+  a |= INT_MAX;
+  VERIFY( a == -1 );
+}
+
+int
+main()
+{
+  test01();
+  test02();
+  test03();
+  test04();
+}
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc
new file mode 100644
index 00000000000..d5256d67622
--- /dev/null
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc
@@ -0,0 +1,225 @@
+// Copyright (C) 2019 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++2a" }
+// { dg-do run { target c++2a } }
+// { dg-add-options libatomic }
+
+#include <atomic>
+#include <testsuite_hooks.h>
+
+void
+test01()
+{
+  long arr[10] = { };
+  long* value;
+
+  {
+    const auto mo = std::memory_order_relaxed;
+    std::atomic_ref<long*> a(value);
+    bool ok = a.is_lock_free();
+    if constexpr (std::atomic_ref<long*>::is_always_lock_free)
+      VERIFY( ok );
+    a = arr;
+    VERIFY( a.load() == arr );
+    VERIFY( a.load(mo) == arr );
+    a.store(arr+1);
+    VERIFY( a.load() == arr+1 );
+    auto v = a.exchange(arr+2);
+    VERIFY( a == arr+2 );
+    VERIFY( v == arr+1 );
+    v = a.exchange(arr+3, mo);
+    VERIFY( a == arr+3 );
+    VERIFY( v == arr+2 );
+
+    auto expected = a.load();
+    while (!a.compare_exchange_weak(expected, arr+4, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == arr+4 );
+    VERIFY( expected == arr+3 );
+    expected = arr+1;
+    ok = a.compare_exchange_weak(expected, arr+5, mo, mo);
+    VERIFY( !ok && a.load() == arr+4 && expected == arr+4 );
+    ok = a.compare_exchange_strong(expected, arr+5, mo, mo);
+    VERIFY( ok && a.load() == arr+5 && expected == arr+4 );
+    expected = nullptr;
+    ok = a.compare_exchange_strong(expected, arr+3, mo, mo);
+    VERIFY( !ok && a.load() == arr+5 && expected == arr+5 );
+
+    while (!a.compare_exchange_weak(expected, arr+4))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == arr+4  && expected == arr+5 );
+    expected = a.load();
+    while (!a.compare_exchange_weak(expected, arr+6, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == arr+6 && expected == arr+4 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, arr+8);
+    VERIFY( !ok && a.load() == arr+6 && expected == arr+6 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, arr+8, mo);
+    VERIFY( !ok && a.load() == arr+6 && expected == arr+6 );
+
+    ok = a.compare_exchange_strong(expected, arr+5);
+    VERIFY( ok && a.load() == arr+5 && expected == arr+6 );
+    expected = a.load();
+    ok = a.compare_exchange_strong(expected, arr+7, mo);
+    VERIFY( ok && a.load() == arr+7 && expected == arr+5 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, arr+2);
+    VERIFY( !ok && a.load() == arr+7 && expected == arr+7 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, arr+2, mo);
+    VERIFY( !ok && a.load() == arr+7 && expected == arr+7 );
+
+    v = a.fetch_add(2);
+    VERIFY( v == arr+7 );
+    VERIFY( a == arr+9 );
+    v = a.fetch_add(-3, mo);
+    VERIFY( v == arr+9 );
+    VERIFY( a == arr+6 );
+
+    v = a.fetch_sub(3);
+    VERIFY( v == arr+6 );
+    VERIFY( a == arr+3 );
+    v = a.fetch_sub(2, mo);
+    VERIFY( v == arr+3 );
+    VERIFY( a == arr+1 );
+
+    v = a += 5;
+    VERIFY( v == arr+6 );
+    VERIFY( a == arr+6 );
+
+    v = a -= 5;
+    VERIFY( v == arr+1 );
+    VERIFY( a == arr+1 );
+  }
+
+  VERIFY( value == arr+1 );
+}
+
+void
+test02()
+{
+  char arr[10] = { };
+  char* value;
+
+  {
+    const auto mo = std::memory_order_relaxed;
+    std::atomic_ref<char*> a(value);
+    bool ok = a.is_lock_free();
+    if constexpr (std::atomic_ref<char*>::is_always_lock_free)
+      VERIFY( ok );
+    a = arr;
+    VERIFY( a.load() == arr );
+    a.store(arr+3);
+    VERIFY( a.load(mo) == arr+3 );
+    a.store(arr+1, mo);
+    VERIFY( a.load() == arr+1 );
+    auto v = a.exchange(arr+2);
+    VERIFY( a == arr+2 );
+    VERIFY( v == arr+1 );
+    v = a.exchange(arr+3, mo);
+    VERIFY( a == arr+3 );
+    VERIFY( v == arr+2 );
+
+    auto expected = a.load();
+    while (!a.compare_exchange_weak(expected, arr+4, mo, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == arr+4 );
+    VERIFY( expected == arr+3 );
+    expected = arr+1;
+    ok = a.compare_exchange_weak(expected, arr+5, mo, mo);
+    VERIFY( !ok && a.load() == arr+4 && expected == arr+4 );
+    ok = a.compare_exchange_strong(expected, arr+5, mo, mo);
+    VERIFY( ok && a.load() == arr+5 && expected == arr+4 );
+    expected = nullptr;
+    ok = a.compare_exchange_strong(expected, arr+3, mo, mo);
+    VERIFY( !ok && a.load() == arr+5 && expected == arr+5 );
+
+    while (!a.compare_exchange_weak(expected, arr+4))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == arr+4  && expected == arr+5 );
+    expected = a.load();
+    while (!a.compare_exchange_weak(expected, arr+6, mo))
+    { /* weak form can fail spuriously */ }
+    VERIFY( a.load() == arr+6 && expected == arr+4 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, arr+8);
+    VERIFY( !ok && a.load() == arr+6 && expected == arr+6 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_weak(expected, arr+8, mo);
+    VERIFY( !ok && a.load() == arr+6 && expected == arr+6 );
+
+    ok = a.compare_exchange_strong(expected, arr+5);
+    VERIFY( ok && a.load() == arr+5 && expected == arr+6 );
+    expected = a.load();
+    ok = a.compare_exchange_strong(expected, arr+7, mo);
+    VERIFY( ok && a.load() == arr+7 && expected == arr+5 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, arr+2);
+    VERIFY( !ok && a.load() == arr+7 && expected == arr+7 );
+    expected = a.load() + 1;
+    ok = a.compare_exchange_strong(expected, arr+2, mo);
+    VERIFY( !ok && a.load() == arr+7 && expected == arr+7 );
+
+    v = a.fetch_add(2);
+    VERIFY( v == arr+7 );
+    VERIFY( a == arr+9 );
+    v = a.fetch_add(-3, mo);
+    VERIFY( v == arr+9 );
+    VERIFY( a == arr+6 );
+
+    v = a.fetch_sub(3);
+    VERIFY( v == arr+6 );
+    VERIFY( a == arr+3 );
+    v = a.fetch_sub(2, mo);
+    VERIFY( v == arr+3 );
+    VERIFY( a == arr+1 );
+
+    v = a += 5;
+    VERIFY( v == arr+6 );
+    VERIFY( a == arr+6 );
+
+    v = a -= 5;
+    VERIFY( v == arr+1 );
+    VERIFY( a == arr+1 );
+  }
+
+  VERIFY( value == arr+1 );
+}
+
+void
+test03()
+{
+  int i = 0;
+  int* ptr = 0;
+  std::atomic_ref<int*> a0(ptr);
+  std::atomic_ref<int*> a1(ptr);
+  std::atomic_ref<int*> a2(a0);
+  a0 = &i;
+  VERIFY( a1 == &i );
+  VERIFY( a2 == &i );
+}
+
+int
+main()
+{
+  test01();
+  test02();
+  test03();
+}
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc
new file mode 100644
index 00000000000..a3fd4505d0f
--- /dev/null
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc
@@ -0,0 +1,74 @@
+// Copyright (C) 2019 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++2a" }
+// { dg-do compile { target c++2a } }
+
+#include <atomic>
+
+void
+test01()
+{
+  struct X { int c; };
+  using A = std::atomic_ref<X>;
+  static_assert( std::is_standard_layout_v<A> );
+  static_assert( std::is_nothrow_copy_constructible_v<A> );
+  static_assert( std::is_trivially_destructible_v<A> );
+  static_assert( std::is_same_v<A::value_type, X> );
+  static_assert( !std::is_copy_assignable_v<A> );
+  static_assert( !std::is_move_assignable_v<A> );
+}
+
+void
+test02()
+{
+  using A = std::atomic_ref<int>;
+  static_assert( std::is_standard_layout_v<A> );
+  static_assert( std::is_nothrow_copy_constructible_v<A> );
+  static_assert( std::is_trivially_destructible_v<A> );
+  static_assert( std::is_same_v<A::value_type, int> );
+  static_assert( std::is_same_v<A::difference_type, A::value_type> );
+  static_assert( !std::is_copy_assignable_v<A> );
+  static_assert( !std::is_move_assignable_v<A> );
+}
+
+void
+test03()
+{
+  using A = std::atomic_ref<double>;
+  static_assert( std::is_standard_layout_v<A> );
+  static_assert( std::is_nothrow_copy_constructible_v<A> );
+  static_assert( std::is_trivially_destructible_v<A> );
+  static_assert( std::is_same_v<A::value_type, double> );
+  static_assert( std::is_same_v<A::difference_type, A::value_type> );
+  static_assert( !std::is_copy_assignable_v<A> );
+  static_assert( !std::is_move_assignable_v<A> );
+}
+
+void
+test04()
+{
+  using A = std::atomic_ref<int*>;
+  static_assert( std::is_standard_layout_v<A> );
+  static_assert( std::is_nothrow_copy_constructible_v<A> );
+  static_assert( std::is_trivially_destructible_v<A> );
+  static_assert( std::is_same_v<A::value_type, int*> );
+  static_assert( std::is_same_v<A::difference_type, std::ptrdiff_t> );
+  static_assert( std::is_nothrow_copy_constructible_v<A> );
+  static_assert( !std::is_copy_assignable_v<A> );
+  static_assert( !std::is_move_assignable_v<A> );
+}


* Re: [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20
  2019-07-11 19:45 [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20 Jonathan Wakely
@ 2019-07-12  9:30 ` Jonathan Wakely
  2019-07-12 11:24 ` Jonathan Wakely
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Jonathan Wakely @ 2019-07-12  9:30 UTC (permalink / raw)
  To: libstdc++, gcc-patches

On 11/07/19 20:45 +0100, Jonathan Wakely wrote:
>This adds the new atomic types from C++2a, as proposed by P0019 and
>P0020. To reduce duplication the calls to the compiler's atomic
>built-ins are wrapped in new functions in the __atomic_impl namespace.
>These functions are currently only used by std::atomic<floating-point>
>and std::atomic_ref but could also be used for all other specializations
>of std::atomic.
>
>	* include/bits/atomic_base.h (__atomic_impl): New namespace for
>	wrappers around atomic built-ins.
>	(__atomic_float, __atomic_ref): New class templates for use as base
>	classes.
>	* include/std/atomic (atomic<float>, atomic<double>)
>	(atomic<long double>): New explicit specializations.
>	(atomic_ref): New class template.
>	(__cpp_lib_atomic_ref): Define.
>	* include/std/version (__cpp_lib_atomic_ref): Define.
>	* testsuite/29_atomics/atomic/60695.cc: Adjust dg-error.
>   	* testsuite/29_atomics/atomic_float/1.cc: New test.
>   	* testsuite/29_atomics/atomic_float/requirements.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/deduction.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/float.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/generic.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/integral.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/pointer.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/requirements.cc: New test.
>
>Testted x86_64-linux, committed to trunk.

I forgot to mention a couple of things about this patch.

For std::atomic<floating-point> and std::atomic_ref<floating-point>
I'm requiring the FP object to be aligned to __alignof__(T), not
alignof(T), which means that for IA-32 atomic<double> has stricter
alignment than a plain double (as is already the case for long long on
IA-32). This matches Clang's treatment of _Atomic double, but not
GCC's. I've tried to get the x86 psABI group to specify the required
alignment for atomics, but there's still no decision:
https://groups.google.com/forum/#!topic/ia32-abi/Tlu6Hs-ohPY
I'm more concerned about compatibility between our std::atomic and
libc++'s std::atomic (which is going to use Clang's _Atomic semantics)
than I am about compatibility between our std::atomic and GCC's C
_Atomic. That's why I decided to aim for compatibility with Clang,
not GCC. Also, I think GCC gets the alignment wrong for C _Atomic :-(
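
Concretely, assuming an IA-32 target (where the psABI only requires
4-byte alignment for a plain double, but GCC's natural alignment for
the type is 8), the distinction looks like this; the asserted values
are illustrative and specific to that target:

  #include <atomic>

  // Illustrative only: these values assume an IA-32 (32-bit x86) target.
  static_assert(alignof(double) == 4);      // psABI minimum for plain double
  static_assert(__alignof__(double) == 8);  // GCC's "natural" alignment

  // The atomic types take the stricter __alignof__(T), so the object
  // can always be accessed with a single lock-free 64-bit operation.
  static_assert(alignof(std::atomic<double>) == 8);
  static_assert(std::atomic_ref<double>::required_alignment == 8);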

The C front-end supports _Atomic floating-point types, and seems to do
the right thing (including calling __atomic_feraiseexcept from
libatomic), but we can't use _Atomic in C++. It might make sense to
teach the C++ front-end to support _Atomic (maybe as an attribute, or
just using the _Atomic keyword directly as Clang++ does). That way the
C++ library wouldn't have to try to emulate what _Atomic does with
flaky template metaprogramming. But in the absence of compiler
support, atomic arithmetic on floating-point objects is implemented as
a relaxed atomic load, followed by non-atomic addition/subtraction on
a local variable, followed by a CAS loop to update the atomic object.
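
In other words, fetch_add on a floating-point atomic boils down to
something like the following sketch (written against plain std::atomic
for illustration; the patch implements the same loop in its
__atomic_impl wrappers):

  #include <atomic>

  template<typename T>
  T fetch_add_flt(std::atomic<T>& a, T arg,
                  std::memory_order m = std::memory_order_seq_cst) noexcept
  {
    // A relaxed load suffices here; the CAS below provides the ordering.
    T oldval = a.load(std::memory_order_relaxed);
    T newval = oldval + arg;  // plain, non-atomic FP addition on a local
    // On failure compare_exchange_weak reloads oldval, so recompute.
    while (!a.compare_exchange_weak(oldval, newval, m,
                                    std::memory_order_relaxed))
      newval = oldval + arg;
    return oldval;
  }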



* Re: [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20
  2019-07-11 19:45 [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20 Jonathan Wakely
  2019-07-12  9:30 ` Jonathan Wakely
@ 2019-07-12 11:24 ` Jonathan Wakely
  2019-07-12 12:11   ` Jonathan Wakely
  2019-07-12 11:44 ` Jonathan Wakely
  2019-07-12 15:56 ` Jonathan Wakely
  3 siblings, 1 reply; 6+ messages in thread
From: Jonathan Wakely @ 2019-07-12 11:24 UTC (permalink / raw)
  To: libstdc++, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 649 bytes --]

On 11/07/19 20:45 +0100, Jonathan Wakely wrote:
>This adds the new atomic types from C++2a, as proposed by P0019 and
>P0020. To reduce duplication the calls to the compiler's atomic
>built-ins are wrapped in new functions in the __atomic_impl namespace.
>These functions are currently only used by std::atomic<floating-point>
>and std::atomic_ref but could also be used for all other specializations
>of std::atomic.

Here's a patch to reuse the new __atomic_impl functions in the
existing atomic<integral> and atomic<pointer> specializations (and
apply some general tidying up).

I don't plan to commit this yet, but I might do so at some point.
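
The change is largely mechanical: member functions that previously
spelled out the built-in calls inline now forward to the shared
wrappers, along the lines of this excerpt (simplified from the diff
below):

  // Before: each specialization called the built-in directly.
  __int_type
  operator+=(__int_type __i) noexcept
  { return __atomic_add_fetch(&_M_i, __i, int(memory_order_seq_cst)); }

  // After: forward to the shared wrapper in __atomic_impl.
  value_type
  operator+=(value_type __i) noexcept
  { return __atomic_impl::__add_fetch(&_M_i, __i); }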



[-- Attachment #2: patch.txt --]
[-- Type: text/x-patch, Size: 71696 bytes --]

diff --git a/libstdc++-v3/include/bits/atomic_base.h b/libstdc++-v3/include/bits/atomic_base.h
index 146e70a9f2e..718eca7424a 100644
--- a/libstdc++-v3/include/bits/atomic_base.h
+++ b/libstdc++-v3/include/bits/atomic_base.h
@@ -230,6 +230,207 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     { return __i ? __GCC_ATOMIC_TEST_AND_SET_TRUEVAL : 0; }
   };
 
+  // Implementation details of std::atomic and std::atomic_ref
+  namespace __atomic_impl
+  {
+    // Remove volatile and create a non-deduced context for value arguments.
+    template<typename _Tp>
+      using _Val = typename remove_volatile<_Tp>::type;
+
+    // As above, but for difference_type arguments.
+    template<typename _Tp>
+      using _Diff = typename
+	conditional<is_pointer<_Tp>::value, ptrdiff_t, _Val<_Tp>>::type;
+
+    template<size_t _Size, size_t _Align>
+      _GLIBCXX_ALWAYS_INLINE bool
+      is_lock_free() noexcept
+      {
+	// Produce a fake, minimally aligned pointer.
+	return __atomic_is_lock_free(_Size, reinterpret_cast<void *>(-_Align));
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE void
+      store(_Tp* __ptr, _Val<_Tp> __t, memory_order __m) noexcept
+      {
+	const memory_order __b __attribute__((__unused__))
+	  = __m & __memory_order_mask;
+	__glibcxx_assert(__b != memory_order_acquire);
+	__glibcxx_assert(__b != memory_order_acq_rel);
+	__glibcxx_assert(__b != memory_order_consume);
+
+	__atomic_store(__ptr, std::__addressof(__t), int(__m));
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Val<_Tp>
+      load(_Tp* __ptr, memory_order __m) noexcept
+      {
+	const memory_order __b __attribute__((__unused__))
+	  = __m & __memory_order_mask;
+	__glibcxx_assert(__b != memory_order_release);
+	__glibcxx_assert(__b != memory_order_acq_rel);
+
+	alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
+	auto __dest = reinterpret_cast<_Val<_Tp>*>(__buf);
+	__atomic_load(__ptr, __dest, int(__m));
+	return *__dest;
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Val<_Tp>
+      exchange(_Tp* __ptr, _Val<_Tp> __desired, memory_order __m) noexcept
+      {
+	__glibcxx_assert((__m & __memory_order_mask) != memory_order_consume);
+
+        alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
+	auto __dest = reinterpret_cast<_Val<_Tp>*>(__buf);
+	__atomic_exchange(__ptr, std::__addressof(__desired), __dest, int(__m));
+	return *__dest;
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE bool
+      compare_exchange_weak(_Tp* __ptr, _Val<_Tp>& __expected,
+			    _Val<_Tp> __desired, memory_order __success,
+			    memory_order __failure) noexcept
+      {
+	const memory_order __bs __attribute__((__unused__))
+	  = __success & __memory_order_mask;
+	const memory_order __bf __attribute__((__unused__))
+	  = __failure & __memory_order_mask;
+	__glibcxx_assert(__bf != memory_order_release);
+	__glibcxx_assert(__bf != memory_order_acq_rel);
+	__glibcxx_assert(__bf <= __bs);
+
+	return __atomic_compare_exchange(__ptr, std::__addressof(__expected),
+					 std::__addressof(__desired), true,
+					 int(__success), int(__failure));
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE bool
+      compare_exchange_strong(_Tp* __ptr, _Val<_Tp>& __expected,
+			      _Val<_Tp> __desired, memory_order __success,
+			      memory_order __failure) noexcept
+      {
+	const memory_order __succ __attribute__((__unused__))
+	  = __success & __memory_order_mask;
+	const memory_order __fail __attribute__((__unused__))
+	  = __failure & __memory_order_mask;
+	__glibcxx_assert(__fail != memory_order_release);
+	__glibcxx_assert(__fail != memory_order_acq_rel);
+	__glibcxx_assert(__fail <= __succ);
+
+	return __atomic_compare_exchange(__ptr, std::__addressof(__expected),
+					 std::__addressof(__desired), false,
+					 int(__success), int(__failure));
+      }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_add(_Tp* __ptr, _Diff<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_add(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_sub(_Tp* __ptr, _Diff<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_sub(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_and(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_and(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_or(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_or(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      fetch_xor(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      { return __atomic_fetch_xor(__ptr, __i, int(__m)); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __add_fetch(_Tp* __ptr, _Diff<_Tp> __i) noexcept
+      { return __atomic_add_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __sub_fetch(_Tp* __ptr, _Diff<_Tp> __i) noexcept
+      { return __atomic_sub_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __and_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      { return __atomic_and_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __or_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      { return __atomic_or_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+    template<typename _Tp>
+      _GLIBCXX_ALWAYS_INLINE _Tp
+      __xor_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      { return __atomic_xor_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
+
+#if __cplusplus > 201703L
+    template<typename _Tp>
+      _Tp
+      __fetch_add_flt(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      {
+	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
+	_Val<_Tp> __newval = __oldval + __i;
+	while (!compare_exchange_weak(__ptr, __oldval, __newval, __m,
+				      memory_order_relaxed))
+	  __newval = __oldval + __i;
+	return __oldval;
+      }
+
+    template<typename _Tp>
+      _Tp
+      __fetch_sub_flt(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
+      {
+	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
+	_Val<_Tp> __newval = __oldval - __i;
+	while (!compare_exchange_weak(__ptr, __oldval, __newval, __m,
+				      memory_order_relaxed))
+	  __newval = __oldval - __i;
+	return __oldval;
+      }
+
+    template<typename _Tp>
+      _Tp
+      __add_fetch_flt(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      {
+	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
+	_Val<_Tp> __newval = __oldval + __i;
+	while (!compare_exchange_weak(__ptr, __oldval, __newval,
+				      memory_order_seq_cst,
+				      memory_order_relaxed))
+	  __newval = __oldval + __i;
+	return __newval;
+      }
+
+    template<typename _Tp>
+      _Tp
+      __sub_fetch_flt(_Tp* __ptr, _Val<_Tp> __i) noexcept
+      {
+	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
+	_Val<_Tp> __newval = __oldval - __i;
+	while (!compare_exchange_weak(__ptr, __oldval, __newval,
+				      memory_order_seq_cst,
+				      memory_order_relaxed))
+	  __newval = __oldval - __i;
+	return __newval;
+      }
+#endif // C++2a
+  } // namespace __atomic_impl
+
 
   /// Base class for atomic integrals.
   //
@@ -262,12 +463,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       using difference_type = value_type;
 
     private:
-      typedef _ITp 	__int_type;
-
       static constexpr int _S_alignment =
 	sizeof(_ITp) > alignof(_ITp) ? sizeof(_ITp) : alignof(_ITp);
 
-      alignas(_S_alignment) __int_type _M_i;
+      alignas(_S_alignment) value_type _M_i;
 
     public:
       __atomic_base() noexcept = default;
@@ -276,206 +475,154 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __atomic_base& operator=(const __atomic_base&) = delete;
       __atomic_base& operator=(const __atomic_base&) volatile = delete;
 
-      // Requires __int_type convertible to _M_i.
-      constexpr __atomic_base(__int_type __i) noexcept : _M_i (__i) { }
+      constexpr __atomic_base(value_type __i) noexcept : _M_i (__i) { }
 
-      operator __int_type() const noexcept
+      operator value_type() const noexcept
       { return load(); }
 
-      operator __int_type() const volatile noexcept
+      operator value_type() const volatile noexcept
       { return load(); }
 
-      __int_type
-      operator=(__int_type __i) noexcept
+      value_type
+      operator=(value_type __i) noexcept
       {
 	store(__i);
 	return __i;
       }
 
-      __int_type
-      operator=(__int_type __i) volatile noexcept
+      value_type
+      operator=(value_type __i) volatile noexcept
       {
 	store(__i);
 	return __i;
       }
 
-      __int_type
+      value_type
       operator++(int) noexcept
       { return fetch_add(1); }
 
-      __int_type
+      value_type
       operator++(int) volatile noexcept
       { return fetch_add(1); }
 
-      __int_type
+      value_type
       operator--(int) noexcept
       { return fetch_sub(1); }
 
-      __int_type
+      value_type
       operator--(int) volatile noexcept
       { return fetch_sub(1); }
 
-      __int_type
+      value_type
       operator++() noexcept
-      { return __atomic_add_fetch(&_M_i, 1, int(memory_order_seq_cst)); }
+      { return __atomic_impl::__add_fetch(&_M_i, 1); }
 
-      __int_type
+      value_type
       operator++() volatile noexcept
-      { return __atomic_add_fetch(&_M_i, 1, int(memory_order_seq_cst)); }
+      { return __atomic_impl::__add_fetch(&_M_i, 1); }
 
-      __int_type
+      value_type
       operator--() noexcept
-      { return __atomic_sub_fetch(&_M_i, 1, int(memory_order_seq_cst)); }
+      { return __atomic_impl::__sub_fetch(&_M_i, 1); }
 
-      __int_type
+      value_type
       operator--() volatile noexcept
-      { return __atomic_sub_fetch(&_M_i, 1, int(memory_order_seq_cst)); }
+      { return __atomic_impl::__sub_fetch(&_M_i, 1); }
 
-      __int_type
-      operator+=(__int_type __i) noexcept
-      { return __atomic_add_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator+=(value_type __i) noexcept
+      { return __atomic_impl::__add_fetch(&_M_i, __i); }
 
-      __int_type
-      operator+=(__int_type __i) volatile noexcept
-      { return __atomic_add_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator+=(value_type __i) volatile noexcept
+      { return __atomic_impl::__add_fetch(&_M_i, __i); }
 
-      __int_type
-      operator-=(__int_type __i) noexcept
-      { return __atomic_sub_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator-=(value_type __i) noexcept
+      { return __atomic_impl::__sub_fetch(&_M_i, __i); }
 
-      __int_type
-      operator-=(__int_type __i) volatile noexcept
-      { return __atomic_sub_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator-=(value_type __i) volatile noexcept
+      { return __atomic_impl::__sub_fetch(&_M_i, __i); }
 
-      __int_type
-      operator&=(__int_type __i) noexcept
-      { return __atomic_and_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator&=(value_type __i) noexcept
+      { return __atomic_impl::__and_fetch(&_M_i, __i); }
 
-      __int_type
-      operator&=(__int_type __i) volatile noexcept
-      { return __atomic_and_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator&=(value_type __i) volatile noexcept
+      { return __atomic_impl::__and_fetch(&_M_i, __i); }
 
-      __int_type
-      operator|=(__int_type __i) noexcept
-      { return __atomic_or_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator|=(value_type __i) noexcept
+      { return __atomic_impl::__or_fetch(&_M_i, __i); }
 
-      __int_type
-      operator|=(__int_type __i) volatile noexcept
-      { return __atomic_or_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator|=(value_type __i) volatile noexcept
+      { return __atomic_impl::__or_fetch(&_M_i, __i); }
 
-      __int_type
-      operator^=(__int_type __i) noexcept
-      { return __atomic_xor_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator^=(value_type __i) noexcept
+      { return __atomic_impl::__xor_fetch(&_M_i, __i); }
 
-      __int_type
-      operator^=(__int_type __i) volatile noexcept
-      { return __atomic_xor_fetch(&_M_i, __i, int(memory_order_seq_cst)); }
+      value_type
+      operator^=(value_type __i) volatile noexcept
+      { return __atomic_impl::__xor_fetch(&_M_i, __i); }
 
       bool
       is_lock_free() const noexcept
-      {
-	// Use a fake, minimally aligned pointer.
-	return __atomic_is_lock_free(sizeof(_M_i),
-	    reinterpret_cast<void *>(-_S_alignment));
-      }
+      { return __atomic_impl::is_lock_free<sizeof(_M_i), _S_alignment>(); }
 
       bool
       is_lock_free() const volatile noexcept
-      {
-	// Use a fake, minimally aligned pointer.
-	return __atomic_is_lock_free(sizeof(_M_i),
-	    reinterpret_cast<void *>(-_S_alignment));
-      }
+      { return __atomic_impl::is_lock_free<sizeof(_M_i), _S_alignment>(); }
 
       _GLIBCXX_ALWAYS_INLINE void
-      store(__int_type __i, memory_order __m = memory_order_seq_cst) noexcept
-      {
-	memory_order __b = __m & __memory_order_mask;
-	__glibcxx_assert(__b != memory_order_acquire);
-	__glibcxx_assert(__b != memory_order_acq_rel);
-	__glibcxx_assert(__b != memory_order_consume);
-
-	__atomic_store_n(&_M_i, __i, int(__m));
-      }
+      store(value_type __i, memory_order __m = memory_order_seq_cst) noexcept
+      { __atomic_impl::store(&_M_i, __i, __m); }
 
       _GLIBCXX_ALWAYS_INLINE void
-      store(__int_type __i,
+      store(value_type __i,
 	    memory_order __m = memory_order_seq_cst) volatile noexcept
-      {
-	memory_order __b = __m & __memory_order_mask;
-	__glibcxx_assert(__b != memory_order_acquire);
-	__glibcxx_assert(__b != memory_order_acq_rel);
-	__glibcxx_assert(__b != memory_order_consume);
+      { __atomic_impl::store(&_M_i, __i, __m); }
 
-	__atomic_store_n(&_M_i, __i, int(__m));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE __int_type
+      _GLIBCXX_ALWAYS_INLINE value_type
       load(memory_order __m = memory_order_seq_cst) const noexcept
-      {
-	memory_order __b = __m & __memory_order_mask;
-	__glibcxx_assert(__b != memory_order_release);
-	__glibcxx_assert(__b != memory_order_acq_rel);
-
-	return __atomic_load_n(&_M_i, int(__m));
-      }
+      { return __atomic_impl::load(&_M_i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
+      _GLIBCXX_ALWAYS_INLINE value_type
       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
-      {
-	memory_order __b = __m & __memory_order_mask;
-	__glibcxx_assert(__b != memory_order_release);
-	__glibcxx_assert(__b != memory_order_acq_rel);
+      { return __atomic_impl::load(&_M_i, __m); }
 
-	return __atomic_load_n(&_M_i, int(__m));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      exchange(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      exchange(value_type __i,
 	       memory_order __m = memory_order_seq_cst) noexcept
-      {
-	return __atomic_exchange_n(&_M_i, __i, int(__m));
-      }
-
+      { return __atomic_impl::exchange(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      exchange(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      exchange(value_type __i,
 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-      {
-	return __atomic_exchange_n(&_M_i, __i, int(__m));
-      }
+      { return __atomic_impl::exchange(&_M_i, __i, __m); }
 
       _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_weak(__int_type& __i1, __int_type __i2,
+      compare_exchange_weak(value_type& __i1, value_type __i2,
 			    memory_order __m1, memory_order __m2) noexcept
       {
-	memory_order __b2 = __m2 & __memory_order_mask;
-	memory_order __b1 = __m1 & __memory_order_mask;
-	__glibcxx_assert(__b2 != memory_order_release);
-	__glibcxx_assert(__b2 != memory_order_acq_rel);
-	__glibcxx_assert(__b2 <= __b1);
-
-	return __atomic_compare_exchange_n(&_M_i, &__i1, __i2, 1,
-					   int(__m1), int(__m2));
+	return __atomic_impl::compare_exchange_weak(&_M_i, __i1, __i2,
+						    __m1, __m2);
       }
 
       _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_weak(__int_type& __i1, __int_type __i2,
+      compare_exchange_weak(value_type& __i1, value_type __i2,
 			    memory_order __m1,
 			    memory_order __m2) volatile noexcept
       {
-	memory_order __b2 = __m2 & __memory_order_mask;
-	memory_order __b1 = __m1 & __memory_order_mask;
-	__glibcxx_assert(__b2 != memory_order_release);
-	__glibcxx_assert(__b2 != memory_order_acq_rel);
-	__glibcxx_assert(__b2 <= __b1);
-
-	return __atomic_compare_exchange_n(&_M_i, &__i1, __i2, 1,
-					   int(__m1), int(__m2));
+	return __atomic_impl::compare_exchange_weak(&_M_i, __i1, __i2,
+						    __m1, __m2);
       }
 
       _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_weak(__int_type& __i1, __int_type __i2,
+      compare_exchange_weak(value_type& __i1, value_type __i2,
 			    memory_order __m = memory_order_seq_cst) noexcept
       {
 	return compare_exchange_weak(__i1, __i2, __m,
@@ -483,7 +630,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       }
 
       _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_weak(__int_type& __i1, __int_type __i2,
+      compare_exchange_weak(value_type& __i1, value_type __i2,
 		   memory_order __m = memory_order_seq_cst) volatile noexcept
       {
 	return compare_exchange_weak(__i1, __i2, __m,
@@ -491,37 +638,24 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       }
 
       _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_strong(__int_type& __i1, __int_type __i2,
+      compare_exchange_strong(value_type& __i1, value_type __i2,
 			      memory_order __m1, memory_order __m2) noexcept
       {
-	memory_order __b2 = __m2 & __memory_order_mask;
-	memory_order __b1 = __m1 & __memory_order_mask;
-	__glibcxx_assert(__b2 != memory_order_release);
-	__glibcxx_assert(__b2 != memory_order_acq_rel);
-	__glibcxx_assert(__b2 <= __b1);
-
-	return __atomic_compare_exchange_n(&_M_i, &__i1, __i2, 0,
-					   int(__m1), int(__m2));
+	return __atomic_impl::compare_exchange_strong(&_M_i, __i1, __i2,
+						      __m1, __m2);
       }
 
       _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_strong(__int_type& __i1, __int_type __i2,
+      compare_exchange_strong(value_type& __i1, value_type __i2,
 			      memory_order __m1,
 			      memory_order __m2) volatile noexcept
       {
-	memory_order __b2 = __m2 & __memory_order_mask;
-	memory_order __b1 = __m1 & __memory_order_mask;
-
-	__glibcxx_assert(__b2 != memory_order_release);
-	__glibcxx_assert(__b2 != memory_order_acq_rel);
-	__glibcxx_assert(__b2 <= __b1);
-
-	return __atomic_compare_exchange_n(&_M_i, &__i1, __i2, 0,
-					   int(__m1), int(__m2));
+	return __atomic_impl::compare_exchange_strong(&_M_i, __i1, __i2,
+						      __m1, __m2);
       }
 
       _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_strong(__int_type& __i1, __int_type __i2,
+      compare_exchange_strong(value_type& __i1, value_type __i2,
 			      memory_order __m = memory_order_seq_cst) noexcept
       {
 	return compare_exchange_strong(__i1, __i2, __m,
@@ -529,464 +663,67 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       }
 
       _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_strong(__int_type& __i1, __int_type __i2,
+      compare_exchange_strong(value_type& __i1, value_type __i2,
 		 memory_order __m = memory_order_seq_cst) volatile noexcept
       {
 	return compare_exchange_strong(__i1, __i2, __m,
 				       __cmpexch_failure_order(__m));
       }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_add(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_add(value_type __i,
 		memory_order __m = memory_order_seq_cst) noexcept
-      { return __atomic_fetch_add(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_add(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_add(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_add(value_type __i,
 		memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return __atomic_fetch_add(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_add(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_sub(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_sub(value_type __i,
 		memory_order __m = memory_order_seq_cst) noexcept
-      { return __atomic_fetch_sub(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_sub(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_sub(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_sub(value_type __i,
 		memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return __atomic_fetch_sub(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_sub(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_and(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_and(value_type __i,
 		memory_order __m = memory_order_seq_cst) noexcept
-      { return __atomic_fetch_and(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_and(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_and(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_and(value_type __i,
 		memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return __atomic_fetch_and(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_and(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_or(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_or(value_type __i,
 	       memory_order __m = memory_order_seq_cst) noexcept
-      { return __atomic_fetch_or(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_or(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_or(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_or(value_type __i,
 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return __atomic_fetch_or(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_or(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_xor(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_xor(value_type __i,
 		memory_order __m = memory_order_seq_cst) noexcept
-      { return __atomic_fetch_xor(&_M_i, __i, int(__m)); }
+      { return __atomic_impl::fetch_xor(&_M_i, __i, __m); }
 
-      _GLIBCXX_ALWAYS_INLINE __int_type
-      fetch_xor(__int_type __i,
+      _GLIBCXX_ALWAYS_INLINE value_type
+      fetch_xor(value_type __i,
 		memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return __atomic_fetch_xor(&_M_i, __i, int(__m)); }
-    };
-
-
-  /// Partial specialization for pointer types.
-  template<typename _PTp>
-    struct __atomic_base<_PTp*>
-    {
-    private:
-      typedef _PTp* 	__pointer_type;
-
-      __pointer_type 	_M_p;
-
-      // Factored out to facilitate explicit specialization.
-      constexpr ptrdiff_t
-      _M_type_size(ptrdiff_t __d) const { return __d * sizeof(_PTp); }
-
-      constexpr ptrdiff_t
-      _M_type_size(ptrdiff_t __d) const volatile { return __d * sizeof(_PTp); }
-
-    public:
-      __atomic_base() noexcept = default;
-      ~__atomic_base() noexcept = default;
-      __atomic_base(const __atomic_base&) = delete;
-      __atomic_base& operator=(const __atomic_base&) = delete;
-      __atomic_base& operator=(const __atomic_base&) volatile = delete;
-
-      // Requires __pointer_type convertible to _M_p.
-      constexpr __atomic_base(__pointer_type __p) noexcept : _M_p (__p) { }
-
-      operator __pointer_type() const noexcept
-      { return load(); }
-
-      operator __pointer_type() const volatile noexcept
-      { return load(); }
-
-      __pointer_type
-      operator=(__pointer_type __p) noexcept
-      {
-	store(__p);
-	return __p;
-      }
-
-      __pointer_type
-      operator=(__pointer_type __p) volatile noexcept
-      {
-	store(__p);
-	return __p;
-      }
-
-      __pointer_type
-      operator++(int) noexcept
-      { return fetch_add(1); }
-
-      __pointer_type
-      operator++(int) volatile noexcept
-      { return fetch_add(1); }
-
-      __pointer_type
-      operator--(int) noexcept
-      { return fetch_sub(1); }
-
-      __pointer_type
-      operator--(int) volatile noexcept
-      { return fetch_sub(1); }
-
-      __pointer_type
-      operator++() noexcept
-      { return __atomic_add_fetch(&_M_p, _M_type_size(1),
-				  int(memory_order_seq_cst)); }
-
-      __pointer_type
-      operator++() volatile noexcept
-      { return __atomic_add_fetch(&_M_p, _M_type_size(1),
-				  int(memory_order_seq_cst)); }
-
-      __pointer_type
-      operator--() noexcept
-      { return __atomic_sub_fetch(&_M_p, _M_type_size(1),
-				  int(memory_order_seq_cst)); }
-
-      __pointer_type
-      operator--() volatile noexcept
-      { return __atomic_sub_fetch(&_M_p, _M_type_size(1),
-				  int(memory_order_seq_cst)); }
-
-      __pointer_type
-      operator+=(ptrdiff_t __d) noexcept
-      { return __atomic_add_fetch(&_M_p, _M_type_size(__d),
-				  int(memory_order_seq_cst)); }
-
-      __pointer_type
-      operator+=(ptrdiff_t __d) volatile noexcept
-      { return __atomic_add_fetch(&_M_p, _M_type_size(__d),
-				  int(memory_order_seq_cst)); }
-
-      __pointer_type
-      operator-=(ptrdiff_t __d) noexcept
-      { return __atomic_sub_fetch(&_M_p, _M_type_size(__d),
-				  int(memory_order_seq_cst)); }
-
-      __pointer_type
-      operator-=(ptrdiff_t __d) volatile noexcept
-      { return __atomic_sub_fetch(&_M_p, _M_type_size(__d),
-				  int(memory_order_seq_cst)); }
-
-      bool
-      is_lock_free() const noexcept
-      {
-	// Produce a fake, minimally aligned pointer.
-	return __atomic_is_lock_free(sizeof(_M_p),
-	    reinterpret_cast<void *>(-__alignof(_M_p)));
-      }
-
-      bool
-      is_lock_free() const volatile noexcept
-      {
-	// Produce a fake, minimally aligned pointer.
-	return __atomic_is_lock_free(sizeof(_M_p),
-	    reinterpret_cast<void *>(-__alignof(_M_p)));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE void
-      store(__pointer_type __p,
-	    memory_order __m = memory_order_seq_cst) noexcept
-      {
-        memory_order __b = __m & __memory_order_mask;
-
-	__glibcxx_assert(__b != memory_order_acquire);
-	__glibcxx_assert(__b != memory_order_acq_rel);
-	__glibcxx_assert(__b != memory_order_consume);
-
-	__atomic_store_n(&_M_p, __p, int(__m));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE void
-      store(__pointer_type __p,
-	    memory_order __m = memory_order_seq_cst) volatile noexcept
-      {
-	memory_order __b = __m & __memory_order_mask;
-	__glibcxx_assert(__b != memory_order_acquire);
-	__glibcxx_assert(__b != memory_order_acq_rel);
-	__glibcxx_assert(__b != memory_order_consume);
-
-	__atomic_store_n(&_M_p, __p, int(__m));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE __pointer_type
-      load(memory_order __m = memory_order_seq_cst) const noexcept
-      {
-	memory_order __b = __m & __memory_order_mask;
-	__glibcxx_assert(__b != memory_order_release);
-	__glibcxx_assert(__b != memory_order_acq_rel);
-
-	return __atomic_load_n(&_M_p, int(__m));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE __pointer_type
-      load(memory_order __m = memory_order_seq_cst) const volatile noexcept
-      {
-	memory_order __b = __m & __memory_order_mask;
-	__glibcxx_assert(__b != memory_order_release);
-	__glibcxx_assert(__b != memory_order_acq_rel);
-
-	return __atomic_load_n(&_M_p, int(__m));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE __pointer_type
-      exchange(__pointer_type __p,
-	       memory_order __m = memory_order_seq_cst) noexcept
-      {
-	return __atomic_exchange_n(&_M_p, __p, int(__m));
-      }
-
-
-      _GLIBCXX_ALWAYS_INLINE __pointer_type
-      exchange(__pointer_type __p,
-	       memory_order __m = memory_order_seq_cst) volatile noexcept
-      {
-	return __atomic_exchange_n(&_M_p, __p, int(__m));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
-			      memory_order __m1,
-			      memory_order __m2) noexcept
-      {
-	memory_order __b2 = __m2 & __memory_order_mask;
-	memory_order __b1 = __m1 & __memory_order_mask;
-	__glibcxx_assert(__b2 != memory_order_release);
-	__glibcxx_assert(__b2 != memory_order_acq_rel);
-	__glibcxx_assert(__b2 <= __b1);
-
-	return __atomic_compare_exchange_n(&_M_p, &__p1, __p2, 0,
-					   int(__m1), int(__m2));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
-			      memory_order __m1,
-			      memory_order __m2) volatile noexcept
-      {
-	memory_order __b2 = __m2 & __memory_order_mask;
-	memory_order __b1 = __m1 & __memory_order_mask;
-
-	__glibcxx_assert(__b2 != memory_order_release);
-	__glibcxx_assert(__b2 != memory_order_acq_rel);
-	__glibcxx_assert(__b2 <= __b1);
-
-	return __atomic_compare_exchange_n(&_M_p, &__p1, __p2, 0,
-					   int(__m1), int(__m2));
-      }
-
-      _GLIBCXX_ALWAYS_INLINE __pointer_type
-      fetch_add(ptrdiff_t __d,
-		memory_order __m = memory_order_seq_cst) noexcept
-      { return __atomic_fetch_add(&_M_p, _M_type_size(__d), int(__m)); }
-
-      _GLIBCXX_ALWAYS_INLINE __pointer_type
-      fetch_add(ptrdiff_t __d,
-		memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return __atomic_fetch_add(&_M_p, _M_type_size(__d), int(__m)); }
-
-      _GLIBCXX_ALWAYS_INLINE __pointer_type
-      fetch_sub(ptrdiff_t __d,
-		memory_order __m = memory_order_seq_cst) noexcept
-      { return __atomic_fetch_sub(&_M_p, _M_type_size(__d), int(__m)); }
-
-      _GLIBCXX_ALWAYS_INLINE __pointer_type
-      fetch_sub(ptrdiff_t __d,
-		memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return __atomic_fetch_sub(&_M_p, _M_type_size(__d), int(__m)); }
+      { return __atomic_impl::fetch_xor(&_M_i, __i, __m); }
     };
 
 #if __cplusplus > 201703L
-  // Implementation details of atomic_ref and atomic<floating-point>.
-  namespace __atomic_impl
-  {
-    // Remove volatile and create a non-deduced context for value arguments.
-    template<typename _Tp>
-      using _Val = remove_volatile_t<_Tp>;
-
-    // As above, but for difference_type arguments.
-    template<typename _Tp>
-      using _Diff = conditional_t<is_pointer_v<_Tp>, ptrdiff_t, _Val<_Tp>>;
-
-    template<size_t _Size, size_t _Align>
-      _GLIBCXX_ALWAYS_INLINE bool
-      is_lock_free() noexcept
-      {
-	// Produce a fake, minimally aligned pointer.
-	return __atomic_is_lock_free(_Size, reinterpret_cast<void *>(-_Align));
-      }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE void
-      store(_Tp* __ptr, _Val<_Tp> __t, memory_order __m) noexcept
-      { __atomic_store(__ptr, std::__addressof(__t), int(__m)); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      load(_Tp* __ptr, memory_order __m) noexcept
-      {
-	alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
-	_Tp* __dest = reinterpret_cast<_Tp*>(__buf);
-	__atomic_load(__ptr, __dest, int(__m));
-	return *__dest;
-      }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      exchange(_Tp* __ptr, _Val<_Tp> __desired, memory_order __m) noexcept
-      {
-        alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
-	_Tp* __dest = reinterpret_cast<_Tp*>(__buf);
-	__atomic_exchange(__ptr, std::__addressof(__desired), __dest, int(__m));
-	return *__dest;
-      }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_weak(_Tp* __ptr, _Val<_Tp>& __expected,
-			    _Val<_Tp> __desired, memory_order __success,
-			    memory_order __failure) noexcept
-      {
-	return __atomic_compare_exchange(__ptr, std::__addressof(__expected),
-					 std::__addressof(__desired), true,
-					 int(__success), int(__failure));
-      }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE bool
-      compare_exchange_strong(_Tp* __ptr, _Val<_Tp>& __expected,
-			      _Val<_Tp> __desired, memory_order __success,
-			      memory_order __failure) noexcept
-      {
-	return __atomic_compare_exchange(__ptr, std::__addressof(__expected),
-					 std::__addressof(__desired), false,
-					 int(__success), int(__failure));
-      }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      fetch_add(_Tp* __ptr, _Diff<_Tp> __i, memory_order __m) noexcept
-      { return __atomic_fetch_add(__ptr, __i, int(__m)); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      fetch_sub(_Tp* __ptr, _Diff<_Tp> __i, memory_order __m) noexcept
-      { return __atomic_fetch_sub(__ptr, __i, int(__m)); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      fetch_and(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
-      { return __atomic_fetch_and(__ptr, __i, int(__m)); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      fetch_or(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
-      { return __atomic_fetch_or(__ptr, __i, int(__m)); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      fetch_xor(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
-      { return __atomic_fetch_xor(__ptr, __i, int(__m)); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      __add_fetch(_Tp* __ptr, _Diff<_Tp> __i) noexcept
-      { return __atomic_add_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      __sub_fetch(_Tp* __ptr, _Diff<_Tp> __i) noexcept
-      { return __atomic_sub_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      __and_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
-      { return __atomic_and_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      __or_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
-      { return __atomic_or_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
-
-    template<typename _Tp>
-      _GLIBCXX_ALWAYS_INLINE _Tp
-      __xor_fetch(_Tp* __ptr, _Val<_Tp> __i) noexcept
-      { return __atomic_xor_fetch(__ptr, __i, __ATOMIC_SEQ_CST); }
-
-    template<typename _Tp>
-      _Tp
-      __fetch_add_flt(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
-      {
-	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
-	_Val<_Tp> __newval = __oldval + __i;
-	while (!compare_exchange_weak(__ptr, __oldval, __newval, __m,
-				      memory_order_relaxed))
-	  __newval = __oldval + __i;
-	return __oldval;
-      }
-
-    template<typename _Tp>
-      _Tp
-      __fetch_sub_flt(_Tp* __ptr, _Val<_Tp> __i, memory_order __m) noexcept
-      {
-	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
-	_Val<_Tp> __newval = __oldval - __i;
-	while (!compare_exchange_weak(__ptr, __oldval, __newval, __m,
-				      memory_order_relaxed))
-	  __newval = __oldval - __i;
-	return __oldval;
-      }
-
-    template<typename _Tp>
-      _Tp
-      __add_fetch_flt(_Tp* __ptr, _Val<_Tp> __i) noexcept
-      {
-	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
-	_Val<_Tp> __newval = __oldval + __i;
-	while (!compare_exchange_weak(__ptr, __oldval, __newval,
-				      memory_order_seq_cst,
-				      memory_order_relaxed))
-	  __newval = __oldval + __i;
-	return __newval;
-      }
-
-    template<typename _Tp>
-      _Tp
-      __sub_fetch_flt(_Tp* __ptr, _Val<_Tp> __i) noexcept
-      {
-	_Val<_Tp> __oldval = load(__ptr, memory_order_relaxed);
-	_Val<_Tp> __newval = __oldval - __i;
-	while (!compare_exchange_weak(__ptr, __oldval, __newval,
-				      memory_order_seq_cst,
-				      memory_order_relaxed))
-	  __newval = __oldval - __i;
-	return __newval;
-      }
-  } // namespace __atomic_impl
-
-  // base class for atomic<floating-point-type>
+  // Base class for atomic<floating-point-type>.
+  // Implementation for atomic<float>, atomic<double>, atomic<long double>.
   template<typename _Fp>
     struct __atomic_float
     {
diff --git a/libstdc++-v3/include/std/atomic b/libstdc++-v3/include/std/atomic
index 26d8d3946da..686ecc9114e 100644
--- a/libstdc++-v3/include/std/atomic
+++ b/libstdc++-v3/include/std/atomic
@@ -218,19 +218,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
       bool
       is_lock_free() const noexcept
-      {
-	// Produce a fake, minimally aligned pointer.
-	return __atomic_is_lock_free(sizeof(_M_i),
-	    reinterpret_cast<void *>(-_S_alignment));
-      }
+      { return __atomic_impl::is_lock_free<sizeof(_M_i), _S_alignment>(); }
 
       bool
       is_lock_free() const volatile noexcept
-      {
-	// Produce a fake, minimally aligned pointer.
-	return __atomic_is_lock_free(sizeof(_M_i),
-	    reinterpret_cast<void *>(-_S_alignment));
-      }
+      { return __atomic_impl::is_lock_free<sizeof(_M_i), _S_alignment>(); }
 
 #if __cplusplus >= 201703L
       static constexpr bool is_always_lock_free
@@ -239,69 +231,43 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
       void
       store(_Tp __i, memory_order __m = memory_order_seq_cst) noexcept
-      { __atomic_store(std::__addressof(_M_i), std::__addressof(__i), int(__m)); }
+      { __atomic_impl::store(std::__addressof(_M_i), __i, __m); }
 
       void
       store(_Tp __i, memory_order __m = memory_order_seq_cst) volatile noexcept
-      { __atomic_store(std::__addressof(_M_i), std::__addressof(__i), int(__m)); }
+      { __atomic_impl::store(std::__addressof(_M_i), __i, __m); }
 
       _Tp
       load(memory_order __m = memory_order_seq_cst) const noexcept
-      {
-	alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
-	_Tp* __ptr = reinterpret_cast<_Tp*>(__buf);
-	__atomic_load(std::__addressof(_M_i), __ptr, int(__m));
-	return *__ptr;
-      }
+      { return __atomic_impl::load(std::__addressof(_M_i), __m); }
 
       _Tp
       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
-      {
-        alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
-	_Tp* __ptr = reinterpret_cast<_Tp*>(__buf);
-	__atomic_load(std::__addressof(_M_i), __ptr, int(__m));
-	return *__ptr;
-      }
+      { return __atomic_impl::load(std::__addressof(_M_i), __m); }
 
       _Tp
       exchange(_Tp __i, memory_order __m = memory_order_seq_cst) noexcept
-      {
-        alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
-	_Tp* __ptr = reinterpret_cast<_Tp*>(__buf);
-	__atomic_exchange(std::__addressof(_M_i), std::__addressof(__i),
-			  __ptr, int(__m));
-	return *__ptr;
-      }
+      { return __atomic_impl::exchange(std::__addressof(_M_i), __i, __m); }
 
       _Tp
       exchange(_Tp __i,
 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-      {
-        alignas(_Tp) unsigned char __buf[sizeof(_Tp)];
-	_Tp* __ptr = reinterpret_cast<_Tp*>(__buf);
-	__atomic_exchange(std::__addressof(_M_i), std::__addressof(__i),
-			  __ptr, int(__m));
-	return *__ptr;
-      }
+      { return __atomic_impl::exchange(std::__addressof(_M_i), __i, __m); }
 
       bool
       compare_exchange_weak(_Tp& __e, _Tp __i, memory_order __s,
 			    memory_order __f) noexcept
       {
-	return __atomic_compare_exchange(std::__addressof(_M_i),
-					 std::__addressof(__e),
-					 std::__addressof(__i),
-					 true, int(__s), int(__f));
+	return __atomic_impl::compare_exchange_weak(std::__addressof(_M_i),
+						    __e, __i, __s, __f);
       }
 
       bool
       compare_exchange_weak(_Tp& __e, _Tp __i, memory_order __s,
 			    memory_order __f) volatile noexcept
       {
-	return __atomic_compare_exchange(std::__addressof(_M_i),
-					 std::__addressof(__e),
-					 std::__addressof(__i),
-					 true, int(__s), int(__f));
+	return __atomic_impl::compare_exchange_weak(std::__addressof(_M_i),
+						    __e, __i, __s, __f);
       }
 
       bool
@@ -320,20 +286,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       compare_exchange_strong(_Tp& __e, _Tp __i, memory_order __s,
 			      memory_order __f) noexcept
       {
-	return __atomic_compare_exchange(std::__addressof(_M_i),
-					 std::__addressof(__e),
-					 std::__addressof(__i),
-					 false, int(__s), int(__f));
+	return __atomic_impl::compare_exchange_strong(std::__addressof(_M_i),
+						      __e, __i, __s, __f);
       }
 
       bool
       compare_exchange_strong(_Tp& __e, _Tp __i, memory_order __s,
 			      memory_order __f) volatile noexcept
       {
-	return __atomic_compare_exchange(std::__addressof(_M_i),
-					 std::__addressof(__e),
-					 std::__addressof(__i),
-					 false, int(__s), int(__f));
+	return __atomic_impl::compare_exchange_strong(std::__addressof(_M_i),
+						      __e, __i, __s, __f);
       }
 
       bool
@@ -357,9 +319,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       using value_type = _Tp*;
       using difference_type = ptrdiff_t;
 
-      typedef _Tp* 			__pointer_type;
-      typedef __atomic_base<_Tp*>	__base_type;
-      __base_type			_M_b;
+      value_type			_M_p;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -367,183 +327,207 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__pointer_type __p) noexcept : _M_b(__p) { }
+      constexpr atomic(value_type __p) noexcept : _M_p(__p) { }
 
-      operator __pointer_type() const noexcept
-      { return __pointer_type(_M_b); }
+      operator value_type() const noexcept
+      { return load(); }
 
-      operator __pointer_type() const volatile noexcept
-      { return __pointer_type(_M_b); }
+      operator value_type() const volatile noexcept
+      { return load(); }
 
-      __pointer_type
-      operator=(__pointer_type __p) noexcept
-      { return _M_b.operator=(__p); }
+      value_type
+      operator=(value_type __p) noexcept
+      {
+	store(__p);
+	return __p;
+      }
 
-      __pointer_type
-      operator=(__pointer_type __p) volatile noexcept
-      { return _M_b.operator=(__p); }
+      value_type
+      operator=(value_type __p) volatile noexcept
+      {
+	store(__p);
+	return __p;
+      }
 
-      __pointer_type
+      _GLIBCXX_ALWAYS_INLINE value_type
       operator++(int) noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return _M_b++;
+	return fetch_add(1);
       }
 
-      __pointer_type
+      _GLIBCXX_ALWAYS_INLINE value_type
       operator++(int) volatile noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return _M_b++;
+	return fetch_add(1);
       }
 
-      __pointer_type
+      _GLIBCXX_ALWAYS_INLINE value_type
       operator--(int) noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return _M_b--;
+	return fetch_sub(1);
       }
 
-      __pointer_type
+      _GLIBCXX_ALWAYS_INLINE value_type
       operator--(int) volatile noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return _M_b--;
+	return fetch_sub(1);
       }
 
-      __pointer_type
+      value_type
       operator++() noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return ++_M_b;
+	return __atomic_impl::__add_fetch(std::__addressof(_M_p),
+					  _S_type_size(1));
       }
 
-      __pointer_type
+      value_type
       operator++() volatile noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return ++_M_b;
+	return __atomic_impl::__add_fetch(std::__addressof(_M_p),
+					  _S_type_size(1));
       }
 
-      __pointer_type
+      value_type
       operator--() noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return --_M_b;
+	return __atomic_impl::__sub_fetch(std::__addressof(_M_p),
+					  _S_type_size(1));
       }
 
-      __pointer_type
+      value_type
       operator--() volatile noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return --_M_b;
+	return __atomic_impl::__sub_fetch(std::__addressof(_M_p),
+					  _S_type_size(1));
       }
 
-      __pointer_type
+      value_type
       operator+=(ptrdiff_t __d) noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return _M_b.operator+=(__d);
+	return __atomic_impl::__add_fetch(std::__addressof(_M_p),
+					  _S_type_size(__d));
       }
 
-      __pointer_type
+      value_type
       operator+=(ptrdiff_t __d) volatile noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return _M_b.operator+=(__d);
+	return __atomic_impl::__add_fetch(std::__addressof(_M_p),
+					  _S_type_size(__d));
       }
 
-      __pointer_type
+      value_type
       operator-=(ptrdiff_t __d) noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return _M_b.operator-=(__d);
+	return __atomic_impl::__sub_fetch(std::__addressof(_M_p),
+					  _S_type_size(__d));
       }
 
-      __pointer_type
+      value_type
       operator-=(ptrdiff_t __d) volatile noexcept
       {
 #if __cplusplus >= 201703L
 	static_assert( is_object<_Tp>::value, "pointer to object type" );
 #endif
-	return _M_b.operator-=(__d);
+	return __atomic_impl::__sub_fetch(std::__addressof(_M_p),
+					  _S_type_size(__d));
       }
 
       bool
       is_lock_free() const noexcept
-      { return _M_b.is_lock_free(); }
+      {
+	return __atomic_impl::is_lock_free<sizeof(_M_p), __alignof__(_M_p)>();
+      }
 
       bool
       is_lock_free() const volatile noexcept
-      { return _M_b.is_lock_free(); }
+      {
+	return __atomic_impl::is_lock_free<sizeof(_M_p), __alignof__(_M_p)>();
+      }
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_POINTER_LOCK_FREE == 2;
 #endif
 
       void
-      store(__pointer_type __p,
+      store(value_type __p,
 	    memory_order __m = memory_order_seq_cst) noexcept
-      { return _M_b.store(__p, __m); }
+      { __atomic_impl::store(std::__addressof(_M_p), __p, __m); }
 
       void
-      store(__pointer_type __p,
+      store(value_type __p,
 	    memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return _M_b.store(__p, __m); }
+      { __atomic_impl::store(std::__addressof(_M_p), __p, __m); }
 
-      __pointer_type
+      value_type
       load(memory_order __m = memory_order_seq_cst) const noexcept
-      { return _M_b.load(__m); }
+      { return __atomic_impl::load(std::__addressof(_M_p), __m); }
 
-      __pointer_type
+      value_type
       load(memory_order __m = memory_order_seq_cst) const volatile noexcept
-      { return _M_b.load(__m); }
+      { return __atomic_impl::load(std::__addressof(_M_p), __m); }
 
-      __pointer_type
-      exchange(__pointer_type __p,
+      value_type
+      exchange(value_type __p,
 	       memory_order __m = memory_order_seq_cst) noexcept
-      { return _M_b.exchange(__p, __m); }
+      { return __atomic_impl::exchange(std::__addressof(_M_p), __p, __m); }
 
-      __pointer_type
-      exchange(__pointer_type __p,
+      value_type
+      exchange(value_type __p,
 	       memory_order __m = memory_order_seq_cst) volatile noexcept
-      { return _M_b.exchange(__p, __m); }
+      { return __atomic_impl::exchange(std::__addressof(_M_p), __p, __m); }
 
       bool
-      compare_exchange_weak(__pointer_type& __p1, __pointer_type __p2,
+      compare_exchange_weak(value_type& __p1, value_type __p2,
 			    memory_order __m1, memory_order __m2) noexcept
-      { return _M_b.compare_exchange_strong(__p1, __p2, __m1, __m2); }
+      {
+	return __atomic_impl::compare_exchange_weak(std::__addressof(_M_p),
+						    __p1, __p2, __m1, __m2);
+      }
 
       bool
-      compare_exchange_weak(__pointer_type& __p1, __pointer_type __p2,
+      compare_exchange_weak(value_type& __p1, value_type __p2,
 			    memory_order __m1,
 			    memory_order __m2) volatile noexcept
-      { return _M_b.compare_exchange_strong(__p1, __p2, __m1, __m2); }
+      {
+	return __atomic_impl::compare_exchange_weak(std::__addressof(_M_p),
+						    __p1, __p2, __m1, __m2);
+      }
 
       bool
-      compare_exchange_weak(__pointer_type& __p1, __pointer_type __p2,
+      compare_exchange_weak(value_type& __p1, value_type __p2,
 			    memory_order __m = memory_order_seq_cst) noexcept
       {
 	return compare_exchange_weak(__p1, __p2, __m,
@@ -551,7 +535,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       }
 
       bool
-      compare_exchange_weak(__pointer_type& __p1, __pointer_type __p2,
+      compare_exchange_weak(value_type& __p1, value_type __p2,
 		    memory_order __m = memory_order_seq_cst) volatile noexcept
       {
 	return compare_exchange_weak(__p1, __p2, __m,
@@ -559,70 +543,78 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       }
 
       bool
-      compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
+      compare_exchange_strong(value_type& __p1, value_type __p2,
 			      memory_order __m1, memory_order __m2) noexcept
-      { return _M_b.compare_exchange_strong(__p1, __p2, __m1, __m2); }
+      {
+	return __atomic_impl::compare_exchange_strong(std::__addressof(_M_p),
+						      __p1, __p2, __m1, __m2);
+      }
 
       bool
-      compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
+      compare_exchange_strong(value_type& __p1, value_type __p2,
 			      memory_order __m1,
 			      memory_order __m2) volatile noexcept
-      { return _M_b.compare_exchange_strong(__p1, __p2, __m1, __m2); }
+      {
+	return __atomic_impl::compare_exchange_strong(std::__addressof(_M_p),
+						      __p1, __p2, __m1, __m2);
+      }
 
       bool
-      compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
+      compare_exchange_strong(value_type& __p1, value_type __p2,
 			      memory_order __m = memory_order_seq_cst) noexcept
       {
-	return _M_b.compare_exchange_strong(__p1, __p2, __m,
-					    __cmpexch_failure_order(__m));
+	return compare_exchange_strong(__p1, __p2, __m,
+				       __cmpexch_failure_order(__m));
       }
 
       bool
-      compare_exchange_strong(__pointer_type& __p1, __pointer_type __p2,
+      compare_exchange_strong(value_type& __p1, value_type __p2,
 		    memory_order __m = memory_order_seq_cst) volatile noexcept
       {
-	return _M_b.compare_exchange_strong(__p1, __p2, __m,
-					    __cmpexch_failure_order(__m));
+	return compare_exchange_strong(__p1, __p2, __m,
+				       __cmpexch_failure_order(__m));
       }
 
-      __pointer_type
+      value_type
       fetch_add(ptrdiff_t __d,
 		memory_order __m = memory_order_seq_cst) noexcept
       {
-#if __cplusplus >= 201703L
-	static_assert( is_object<_Tp>::value, "pointer to object type" );
-#endif
-	return _M_b.fetch_add(__d, __m);
+	return __atomic_impl::fetch_add(std::__addressof(_M_p),
+					_S_type_size(__d), __m);
       }
 
-      __pointer_type
+      value_type
       fetch_add(ptrdiff_t __d,
 		memory_order __m = memory_order_seq_cst) volatile noexcept
       {
-#if __cplusplus >= 201703L
-	static_assert( is_object<_Tp>::value, "pointer to object type" );
-#endif
-	return _M_b.fetch_add(__d, __m);
+	return __atomic_impl::fetch_add(std::__addressof(_M_p),
+					_S_type_size(__d), __m);
       }
 
-      __pointer_type
+      value_type
       fetch_sub(ptrdiff_t __d,
 		memory_order __m = memory_order_seq_cst) noexcept
       {
-#if __cplusplus >= 201703L
-	static_assert( is_object<_Tp>::value, "pointer to object type" );
-#endif
-	return _M_b.fetch_sub(__d, __m);
+	return __atomic_impl::fetch_sub(std::__addressof(_M_p),
+					_S_type_size(__d), __m);
       }
 
-      __pointer_type
+      value_type
       fetch_sub(ptrdiff_t __d,
 		memory_order __m = memory_order_seq_cst) volatile noexcept
       {
+	return __atomic_impl::fetch_sub(std::__addressof(_M_p),
+					_S_type_size(__d), __m);
+      }
+
+    private:
+      static constexpr ptrdiff_t
+      _S_type_size(ptrdiff_t __d) noexcept
+      {
 #if __cplusplus >= 201703L
-	static_assert( is_object<_Tp>::value, "pointer to object type" );
+	static_assert(is_object_v<_Tp>);
 #endif
-	return _M_b.fetch_sub(__d, __m);
+	return __d * sizeof(_Tp);
       }
     };
 
@@ -631,8 +623,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<char> : __atomic_base<char>
     {
-      typedef char 			__integral_type;
-      typedef __atomic_base<char> 	__base_type;
+      using value_type = char;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -640,10 +631,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_CHAR_LOCK_FREE == 2;
@@ -654,8 +646,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<signed char> : __atomic_base<signed char>
     {
-      typedef signed char 		__integral_type;
-      typedef __atomic_base<signed char> 	__base_type;
+      using value_type = signed char;
 
       atomic() noexcept= default;
       ~atomic() noexcept = default;
@@ -663,10 +654,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_CHAR_LOCK_FREE == 2;
@@ -677,8 +669,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<unsigned char> : __atomic_base<unsigned char>
     {
-      typedef unsigned char 		__integral_type;
-      typedef __atomic_base<unsigned char> 	__base_type;
+      using value_type = unsigned char;
 
       atomic() noexcept= default;
       ~atomic() noexcept = default;
@@ -686,10 +677,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_CHAR_LOCK_FREE == 2;
@@ -700,8 +692,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<short> : __atomic_base<short>
     {
-      typedef short 			__integral_type;
-      typedef __atomic_base<short> 		__base_type;
+      using value_type = short;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -709,10 +700,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_SHORT_LOCK_FREE == 2;
@@ -723,8 +715,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<unsigned short> : __atomic_base<unsigned short>
     {
-      typedef unsigned short 	      	__integral_type;
-      typedef __atomic_base<unsigned short> 		__base_type;
+      using value_type = unsigned short;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -732,10 +723,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_SHORT_LOCK_FREE == 2;
@@ -746,8 +738,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<int> : __atomic_base<int>
     {
-      typedef int 			__integral_type;
-      typedef __atomic_base<int> 		__base_type;
+      using value_type = int;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -755,10 +746,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_INT_LOCK_FREE == 2;
@@ -769,8 +761,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<unsigned int> : __atomic_base<unsigned int>
     {
-      typedef unsigned int		__integral_type;
-      typedef __atomic_base<unsigned int> 	__base_type;
+      using value_type = unsigned int;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -778,10 +769,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_INT_LOCK_FREE == 2;
@@ -792,8 +784,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<long> : __atomic_base<long>
     {
-      typedef long 			__integral_type;
-      typedef __atomic_base<long> 	__base_type;
+      using value_type = long;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -801,10 +792,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_LONG_LOCK_FREE == 2;
@@ -815,8 +807,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<unsigned long> : __atomic_base<unsigned long>
     {
-      typedef unsigned long 		__integral_type;
-      typedef __atomic_base<unsigned long> 	__base_type;
+      using value_type = unsigned long;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -824,10 +815,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_LONG_LOCK_FREE == 2;
@@ -838,8 +830,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<long long> : __atomic_base<long long>
     {
-      typedef long long 		__integral_type;
-      typedef __atomic_base<long long> 		__base_type;
+      using value_type = long long;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -847,10 +838,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_LLONG_LOCK_FREE == 2;
@@ -861,8 +853,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<unsigned long long> : __atomic_base<unsigned long long>
     {
-      typedef unsigned long long       	__integral_type;
-      typedef __atomic_base<unsigned long long> 	__base_type;
+      using value_type = unsigned long long;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -870,10 +861,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_LLONG_LOCK_FREE == 2;
@@ -884,8 +876,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<wchar_t> : __atomic_base<wchar_t>
     {
-      typedef wchar_t 			__integral_type;
-      typedef __atomic_base<wchar_t> 	__base_type;
+      using value_type = wchar_t;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -893,10 +884,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_WCHAR_T_LOCK_FREE == 2;
@@ -908,8 +900,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<char8_t> : __atomic_base<char8_t>
     {
-      typedef char8_t 			__integral_type;
-      typedef __atomic_base<char8_t> 	__base_type;
+      using value_type = char8_t;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -917,23 +908,23 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus > 201402L
     static constexpr bool is_always_lock_free = ATOMIC_CHAR8_T_LOCK_FREE == 2;
 #endif
     };
-#endif
+#endif // char8_t
 
   /// Explicit specialization for char16_t.
   template<>
     struct atomic<char16_t> : __atomic_base<char16_t>
     {
-      typedef char16_t 			__integral_type;
-      typedef __atomic_base<char16_t> 	__base_type;
+      using value_type = char16_t;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -941,10 +932,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_CHAR16_T_LOCK_FREE == 2;
@@ -955,8 +947,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<>
     struct atomic<char32_t> : __atomic_base<char32_t>
     {
-      typedef char32_t 			__integral_type;
-      typedef __atomic_base<char32_t> 	__base_type;
+      using value_type = char32_t;
 
       atomic() noexcept = default;
       ~atomic() noexcept = default;
@@ -964,10 +955,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       atomic& operator=(const atomic&) = delete;
       atomic& operator=(const atomic&) volatile = delete;
 
-      constexpr atomic(__integral_type __i) noexcept : __base_type(__i) { }
+      constexpr atomic(value_type __i) noexcept
+      : __atomic_base<value_type>(__i) { }
 
-      using __base_type::operator __integral_type;
-      using __base_type::operator=;
+      using __atomic_base<value_type>::operator value_type;
+      using __atomic_base<value_type>::operator=;
 
 #if __cplusplus >= 201703L
     static constexpr bool is_always_lock_free = ATOMIC_CHAR32_T_LOCK_FREE == 2;
@@ -1337,9 +1329,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 						     memory_order_seq_cst);
     }
 
-  // Function templates for atomic_integral and atomic_pointer operations only.
-  // Some operations (and, or, xor) are only available for atomic integrals,
-  // which is implemented by taking a parameter of type __atomic_base<_ITp>*.
+  // Function templates for atomic<integral> and atomic<T*> operations only.
+  // These functions are ill-formed if called for a specialization that
+  // does not define the corresponding member function.
 
   template<typename _ITp>
     inline _ITp
@@ -1371,42 +1363,42 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_and_explicit(__atomic_base<_ITp>* __a,
+    atomic_fetch_and_explicit(atomic<_ITp>* __a,
 			      __atomic_val_t<_ITp> __i,
 			      memory_order __m) noexcept
     { return __a->fetch_and(__i, __m); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_and_explicit(volatile __atomic_base<_ITp>* __a,
+    atomic_fetch_and_explicit(volatile atomic<_ITp>* __a,
 			      __atomic_val_t<_ITp> __i,
 			      memory_order __m) noexcept
     { return __a->fetch_and(__i, __m); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_or_explicit(__atomic_base<_ITp>* __a,
+    atomic_fetch_or_explicit(atomic<_ITp>* __a,
 			     __atomic_val_t<_ITp> __i,
 			     memory_order __m) noexcept
     { return __a->fetch_or(__i, __m); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_or_explicit(volatile __atomic_base<_ITp>* __a,
+    atomic_fetch_or_explicit(volatile atomic<_ITp>* __a,
 			     __atomic_val_t<_ITp> __i,
 			     memory_order __m) noexcept
     { return __a->fetch_or(__i, __m); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_xor_explicit(__atomic_base<_ITp>* __a,
+    atomic_fetch_xor_explicit(atomic<_ITp>* __a,
 			      __atomic_val_t<_ITp> __i,
 			      memory_order __m) noexcept
     { return __a->fetch_xor(__i, __m); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_xor_explicit(volatile __atomic_base<_ITp>* __a,
+    atomic_fetch_xor_explicit(volatile atomic<_ITp>* __a,
 			      __atomic_val_t<_ITp> __i,
 			      memory_order __m) noexcept
     { return __a->fetch_xor(__i, __m); }
@@ -1437,37 +1429,37 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_and(__atomic_base<_ITp>* __a,
+    atomic_fetch_and(atomic<_ITp>* __a,
 		     __atomic_val_t<_ITp> __i) noexcept
     { return atomic_fetch_and_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_and(volatile __atomic_base<_ITp>* __a,
+    atomic_fetch_and(volatile atomic<_ITp>* __a,
 		     __atomic_val_t<_ITp> __i) noexcept
     { return atomic_fetch_and_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_or(__atomic_base<_ITp>* __a,
+    atomic_fetch_or(atomic<_ITp>* __a,
 		    __atomic_val_t<_ITp> __i) noexcept
     { return atomic_fetch_or_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_or(volatile __atomic_base<_ITp>* __a,
+    atomic_fetch_or(volatile atomic<_ITp>* __a,
 		    __atomic_val_t<_ITp> __i) noexcept
     { return atomic_fetch_or_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_xor(__atomic_base<_ITp>* __a,
+    atomic_fetch_xor(atomic<_ITp>* __a,
 		     __atomic_val_t<_ITp> __i) noexcept
     { return atomic_fetch_xor_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
     inline _ITp
-    atomic_fetch_xor(volatile __atomic_base<_ITp>* __a,
+    atomic_fetch_xor(volatile atomic<_ITp>* __a,
 		     __atomic_val_t<_ITp> __i) noexcept
     { return atomic_fetch_xor_explicit(__a, __i, memory_order_seq_cst); }
 

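A note for readers following the floating-point parts of the patch above:
the compiler's __atomic_fetch_add built-in only handles integral and
pointer operands, so the __fetch_add_flt/__fetch_sub_flt helpers emulate
floating-point read-modify-write with a compare-exchange loop. Below is a
minimal standalone sketch of the same technique using only the public
std::atomic API; the name fetch_add_double is made up for illustration,
and the memory orders are fixed to seq_cst/relaxed rather than taking a
parameter as the helpers do.

#include <atomic>

// Returns the value before the addition, like fetch_add. A relaxed
// order is enough on CAS failure because a failed exchange stores
// nothing; the successful exchange carries the real ordering.
double
fetch_add_double(std::atomic<double>& a, double arg)
{
  double oldval = a.load(std::memory_order_relaxed);
  double newval = oldval + arg;
  // On failure, compare_exchange_weak reloads the current value into
  // oldval, so only newval needs recomputing before the retry.
  while (!a.compare_exchange_weak(oldval, newval,
                                  std::memory_order_seq_cst,
                                  std::memory_order_relaxed))
    newval = oldval + arg;
  return oldval;
}

This is also why fetch_add on a floating-point atomic is not guaranteed
to be a single hardware instruction even when loads and stores are
lock-free.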

* Re: [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20
  2019-07-11 19:45 [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20 Jonathan Wakely
  2019-07-12  9:30 ` Jonathan Wakely
  2019-07-12 11:24 ` Jonathan Wakely
@ 2019-07-12 11:44 ` Jonathan Wakely
  2019-07-12 15:56 ` Jonathan Wakely
  3 siblings, 0 replies; 6+ messages in thread
From: Jonathan Wakely @ 2019-07-12 11:44 UTC (permalink / raw)
  To: libstdc++, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1461 bytes --]

On 11/07/19 20:45 +0100, Jonathan Wakely wrote:
>This adds the new atomic types from C++2a, as proposed by P0019 and
>P0020. To reduce duplication the calls to the compiler's atomic
>built-ins are wrapped in new functions in the __atomic_impl namespace.
>These functions are currently only used by std::atomic<floating-point>
>and std::atomic_ref but could also be used for all other specializations
>of std::atomic.
>
>	* include/bits/atomic_base.h (__atomic_impl): New namespace for
>	wrappers around atomic built-ins.
>	(__atomic_float, __atomic_ref): New class templates for use as base
>	classes.
>	* include/std/atomic (atomic<float>, atomic<double>)
>	(atomic<long double>): New explicit specializations.
>	(atomic_ref): New class template.
>	(__cpp_lib_atomic_ref): Define.
>	* include/std/version (__cpp_lib_atomic_ref): Define.
>	* testsuite/29_atomics/atomic/60695.cc: Adjust dg-error.
>   	* testsuite/29_atomics/atomic_float/1.cc: New test.
>   	* testsuite/29_atomics/atomic_float/requirements.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/deduction.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/float.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/generic.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/integral.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/pointer.cc: New test.
>   	* testsuite/29_atomics/atomic_ref/requirements.cc: New test.
>

Here's the doc update for these features. Committed to trunk.



[-- Attachment #2: patch.txt --]
[-- Type: text/x-patch, Size: 1701 bytes --]

commit 6e6ad5a7338f902cfa1f8296eb11f6e0e6993a92
Author: redi <redi@138bc75d-0d04-0410-961f-82ee72b054a4>
Date:   Fri Jul 12 11:43:17 2019 +0000

    Update C++2a library status table
    
            * doc/xml/manual/status_cxx2020.xml: Update status for atomic_ref
            and floating point atomics.
    
    git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@273441 138bc75d-0d04-0410-961f-82ee72b054a4

diff --git a/libstdc++-v3/doc/xml/manual/status_cxx2020.xml b/libstdc++-v3/doc/xml/manual/status_cxx2020.xml
index 029c5bc9b8e..3a5de38ad22 100644
--- a/libstdc++-v3/doc/xml/manual/status_cxx2020.xml
+++ b/libstdc++-v3/doc/xml/manual/status_cxx2020.xml
@@ -78,14 +78,13 @@ Feature-testing recommendations for C++</link>.
     </row>
 
     <row>
-      <?dbhtml bgcolor="#C8B0B0" ?>
       <entry>  Floating Point Atomic </entry>
       <entry>
         <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0020r6.html">
 	P0020R6
 	</link>
       </entry>
-      <entry align="center"> </entry>
+      <entry align="center"> 10.1 </entry>
       <entry />
     </row>
 
@@ -345,15 +344,14 @@ Feature-testing recommendations for C++</link>.
     </row>
 
     <row>
-      <?dbhtml bgcolor="#C8B0B0" ?>
       <entry>  Atomic Ref </entry>
       <entry>
         <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0019r8.html">
 	P0019R8
 	</link>
       </entry>
-      <entry align="center"> </entry>
-      <entry />
+      <entry align="center"> 10.1 </entry>
+      <entry> <code>__cpp_lib_atomic_ref &gt;= 201806L</code> </entry>
     </row>
 
     <row>
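
For illustration, user code can key off the macro documented in the
table above (per the ChangeLog it is defined in <version> and
<atomic>); a minimal sketch:

  #include <version>

  #if defined __cpp_lib_atomic_ref && __cpp_lib_atomic_ref >= 201806L
  # include <atomic>

  // Atomic increment of an ordinary int via std::atomic_ref.
  void bump(int& counter)
  {
    std::atomic_ref<int> ref(counter);
    ref.fetch_add(1, std::memory_order_relaxed);
  }
  #endif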


* Re: [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20
  2019-07-12 11:24 ` Jonathan Wakely
@ 2019-07-12 12:11   ` Jonathan Wakely
  0 siblings, 0 replies; 6+ messages in thread
From: Jonathan Wakely @ 2019-07-12 12:11 UTC (permalink / raw)
  To: libstdc++, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 815 bytes --]

On 12/07/19 12:20 +0100, Jonathan Wakely wrote:
>On 11/07/19 20:45 +0100, Jonathan Wakely wrote:
>>This adds the new atomic types from C++2a, as proposed by P0019 and
>>P0020. To reduce duplication the calls to the compiler's atomic
>>built-ins are wrapped in new functions in the __atomic_impl namespace.
>>These functions are currently only used by std::atomic<floating-point>
>>and std::atomic_ref but could also be used for all other specializations
>>of std::atomic.
>
>Here's a patch to reuse the new __atomic_impl functions in the
>existing atomic<integral> and atomic<pointer> specializations (and
>apply some general tidying up).
>
>I don't plan to commit this yet, but I might do so at some point.

And here's a patch for https://wg21.link/lwg3220, which I won't apply
until that open issue is resolved.



[-- Attachment #2: patch.txt --]
[-- Type: text/x-patch, Size: 6158 bytes --]

commit 654cef2273b3231dd9ab64261183f477a378d795
Author: Jonathan Wakely <jwakely@redhat.com>
Date:   Fri Jul 12 12:46:42 2019 +0100

    LWG 3220

diff --git a/libstdc++-v3/include/std/atomic b/libstdc++-v3/include/std/atomic
index 686ecc9114e..e1f1bbc488c 100644
--- a/libstdc++-v3/include/std/atomic
+++ b/libstdc++-v3/include/std/atomic
@@ -1183,13 +1183,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
   template<typename _ITp>
     inline void
-    atomic_store_explicit(atomic<_ITp>* __a, __atomic_val_t<_ITp> __i,
+    atomic_store_explicit(atomic<_ITp>* __a, __type_identity_t<_ITp> __i,
 			  memory_order __m) noexcept
     { __a->store(__i, __m); }
 
   template<typename _ITp>
     inline void
-    atomic_store_explicit(volatile atomic<_ITp>* __a, __atomic_val_t<_ITp> __i,
+    atomic_store_explicit(volatile atomic<_ITp>* __a,
+			  __type_identity_t<_ITp> __i,
 			  memory_order __m) noexcept
     { __a->store(__i, __m); }
 
@@ -1206,22 +1207,22 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
   template<typename _ITp>
     inline _ITp
-    atomic_exchange_explicit(atomic<_ITp>* __a, __atomic_val_t<_ITp> __i,
+    atomic_exchange_explicit(atomic<_ITp>* __a, __type_identity_t<_ITp> __i,
 			     memory_order __m) noexcept
     { return __a->exchange(__i, __m); }
 
   template<typename _ITp>
     inline _ITp
     atomic_exchange_explicit(volatile atomic<_ITp>* __a,
-			     __atomic_val_t<_ITp> __i,
+			     __type_identity_t<_ITp> __i,
 			     memory_order __m) noexcept
     { return __a->exchange(__i, __m); }
 
   template<typename _ITp>
     inline bool
     atomic_compare_exchange_weak_explicit(atomic<_ITp>* __a,
-					  __atomic_val_t<_ITp>* __i1,
-					  __atomic_val_t<_ITp> __i2,
+					  __type_identity_t<_ITp>* __i1,
+					  __type_identity_t<_ITp> __i2,
 					  memory_order __m1,
 					  memory_order __m2) noexcept
     { return __a->compare_exchange_weak(*__i1, __i2, __m1, __m2); }
@@ -1229,8 +1230,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<typename _ITp>
     inline bool
     atomic_compare_exchange_weak_explicit(volatile atomic<_ITp>* __a,
-					  __atomic_val_t<_ITp>* __i1,
-					  __atomic_val_t<_ITp> __i2,
+					  __type_identity_t<_ITp>* __i1,
+					  __type_identity_t<_ITp> __i2,
 					  memory_order __m1,
 					  memory_order __m2) noexcept
     { return __a->compare_exchange_weak(*__i1, __i2, __m1, __m2); }
@@ -1238,8 +1239,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<typename _ITp>
     inline bool
     atomic_compare_exchange_strong_explicit(atomic<_ITp>* __a,
-					    __atomic_val_t<_ITp>* __i1,
-					    __atomic_val_t<_ITp> __i2,
+					    __type_identity_t<_ITp>* __i1,
+					    __type_identity_t<_ITp> __i2,
 					    memory_order __m1,
 					    memory_order __m2) noexcept
     { return __a->compare_exchange_strong(*__i1, __i2, __m1, __m2); }
@@ -1247,8 +1248,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<typename _ITp>
     inline bool
     atomic_compare_exchange_strong_explicit(volatile atomic<_ITp>* __a,
-					    __atomic_val_t<_ITp>* __i1,
-					    __atomic_val_t<_ITp> __i2,
+					    __type_identity_t<_ITp>* __i1,
+					    __type_identity_t<_ITp> __i2,
 					    memory_order __m1,
 					    memory_order __m2) noexcept
     { return __a->compare_exchange_strong(*__i1, __i2, __m1, __m2); }
@@ -1256,12 +1257,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
   template<typename _ITp>
     inline void
-    atomic_store(atomic<_ITp>* __a, __atomic_val_t<_ITp> __i) noexcept
+    atomic_store(atomic<_ITp>* __a, __type_identity_t<_ITp> __i) noexcept
     { atomic_store_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
     inline void
-    atomic_store(volatile atomic<_ITp>* __a, __atomic_val_t<_ITp> __i) noexcept
+    atomic_store(volatile atomic<_ITp>* __a,
+		 __type_identity_t<_ITp> __i) noexcept
     { atomic_store_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
@@ -1276,20 +1278,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
   template<typename _ITp>
     inline _ITp
-    atomic_exchange(atomic<_ITp>* __a, __atomic_val_t<_ITp> __i) noexcept
+    atomic_exchange(atomic<_ITp>* __a, __type_identity_t<_ITp> __i) noexcept
     { return atomic_exchange_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
     inline _ITp
     atomic_exchange(volatile atomic<_ITp>* __a,
-		    __atomic_val_t<_ITp> __i) noexcept
+		    __type_identity_t<_ITp> __i) noexcept
     { return atomic_exchange_explicit(__a, __i, memory_order_seq_cst); }
 
   template<typename _ITp>
     inline bool
     atomic_compare_exchange_weak(atomic<_ITp>* __a,
-				 __atomic_val_t<_ITp>* __i1,
-				 __atomic_val_t<_ITp> __i2) noexcept
+				 __type_identity_t<_ITp>* __i1,
+				 __type_identity_t<_ITp> __i2) noexcept
     {
       return atomic_compare_exchange_weak_explicit(__a, __i1, __i2,
 						   memory_order_seq_cst,
@@ -1299,8 +1301,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<typename _ITp>
     inline bool
     atomic_compare_exchange_weak(volatile atomic<_ITp>* __a,
-				 __atomic_val_t<_ITp>* __i1,
-				 __atomic_val_t<_ITp> __i2) noexcept
+				 __type_identity_t<_ITp>* __i1,
+				 __type_identity_t<_ITp> __i2) noexcept
     {
       return atomic_compare_exchange_weak_explicit(__a, __i1, __i2,
 						   memory_order_seq_cst,
@@ -1310,8 +1312,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<typename _ITp>
     inline bool
     atomic_compare_exchange_strong(atomic<_ITp>* __a,
-				   __atomic_val_t<_ITp>* __i1,
-				   __atomic_val_t<_ITp> __i2) noexcept
+				   __type_identity_t<_ITp>* __i1,
+				   __type_identity_t<_ITp> __i2) noexcept
     {
       return atomic_compare_exchange_strong_explicit(__a, __i1, __i2,
 						     memory_order_seq_cst,
@@ -1321,8 +1323,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   template<typename _ITp>
     inline bool
     atomic_compare_exchange_strong(volatile atomic<_ITp>* __a,
-				   __atomic_val_t<_ITp>* __i1,
-				   __atomic_val_t<_ITp> __i2) noexcept
+				   __type_identity_t<_ITp>* __i1,
+				   __type_identity_t<_ITp> __i2) noexcept
     {
       return atomic_compare_exchange_strong_explicit(__a, __i1, __i2,
 						     memory_order_seq_cst,
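
For illustration, the effect of switching to __type_identity_t in the
patch above: the wrapped parameter becomes a non-deduced context, so
_ITp is deduced from the atomic<_ITp>* argument alone.  A standalone
sketch with hypothetical stand-ins for the internal names:

  #include <atomic>

  template<typename _Tp> struct __type_identity_sk
  { using type = _Tp; };

  template<typename _Tp>
    using __type_identity_sk_t = typename __type_identity_sk<_Tp>::type;

  template<typename _Tp>
    void
    my_atomic_store(std::atomic<_Tp>* __a, __type_identity_sk_t<_Tp> __i)
    { __a->store(__i); }

  int main()
  {
    std::atomic<long> a{0};
    // _Tp is deduced as long from &a alone, and the int literal 1
    // simply converts; with a plain _Tp second parameter, deduction
    // would see both long and int and fail.
    my_atomic_store(&a, 1);
  }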


* Re: [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20
  2019-07-11 19:45 [PATCH] Define std::atomic_ref and std::atomic<floating-point> for C++20 Jonathan Wakely
                   ` (2 preceding siblings ...)
  2019-07-12 11:44 ` Jonathan Wakely
@ 2019-07-12 15:56 ` Jonathan Wakely
  3 siblings, 0 replies; 6+ messages in thread
From: Jonathan Wakely @ 2019-07-12 15:56 UTC (permalink / raw)
  To: libstdc++, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 255 bytes --]

On 11/07/19 20:45 +0100, Jonathan Wakely wrote:
>+  // Repeat for volatile std::atomic<double>
>+  if constexpr (std::atomic<long double>::is_always_lock_free)

Thanks to Uros for pointing out this typo. Fixed by the attached
patch, committed to trunk.



[-- Attachment #2: patch.txt --]
[-- Type: text/x-patch, Size: 904 bytes --]

commit 991bdaf97870e2775a15314823759aec7fd79599
Author: redi <redi@138bc75d-0d04-0410-961f-82ee72b054a4>
Date:   Fri Jul 12 15:45:16 2019 +0000

    Fix inaccurate comment in new test
    
            * testsuite/29_atomics/atomic_float/1.cc: Fix comment.
    
    git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@273448 138bc75d-0d04-0410-961f-82ee72b054a4

diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_float/1.cc b/libstdc++-v3/testsuite/29_atomics/atomic_float/1.cc
index bd0e353538d..b56c026fb99 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_float/1.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_float/1.cc
@@ -476,7 +476,7 @@ test03()
     VERIFY( a0 == 13.2l );
   }
 
-  // Repeat for volatile std::atomic<double>
+  // Repeat for volatile std::atomic<long double>
   if constexpr (std::atomic<long double>::is_always_lock_free)
   {
     volatile std::atomic<long double> a0;
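
For illustration, the guard the corrected comment sits under (a
sketch, not the testsuite code): long double atomics need not be
lock-free on every target, so the test body only runs where they are.

  #include <atomic>

  void test_long_double()
  {
    if constexpr (std::atomic<long double>::is_always_lock_free)
    {
      volatile std::atomic<long double> a0(1.0L);
      a0.store(13.2L); // volatile overloads of the members are used
      // the test then checks a0 == 13.2l with VERIFY
    }
  }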

