public inbox for libstdc++@gcc.gnu.org
From: "François Dumont" <frs.dumont@gmail.com>
To: Jonathan Wakely <jwakely@redhat.com>
Cc: "libstdc++@gcc.gnu.org" <libstdc++@gcc.gnu.org>
Subject: Re: libstdc++ PR 57272 Fancy pointer support in Hashtable
Date: Sun, 1 Nov 2020 22:48:16 +0100	[thread overview]
Message-ID: <528bdf1d-ed96-5d9d-89db-e1bf57268e24@gmail.com> (raw)
In-Reply-To: <20201020110422.GA926136@redhat.com>

[-- Attachment #1: Type: text/plain, Size: 6803 bytes --]

Here is another attempt.

This time I am storing the nodes through the allocator pointer type only 
in the singly linked list of nodes. Buckets are still __node_base* so 
that the custom pointer is not manipulated too much. Moreover, iterators 
also use raw node pointers.

As advised I introduced new types for the custom pointer case. But I am 
also using those types in the GNU versioned namespace so that I could 
more easily test this new code with all the existing test cases.

Note that, since we are introducing a new ABI here anyway, I am also 
changing the node memory layout in this case. I think it is better to 
put the cached hash code before the node value rather than after: it is 
closer to the beginning of the node and so can be read without faulting 
in another memory page.

To be clear, the current node memory layout is:
- next node pointer
- node value_type
- hash code (optional)

The new node memory layout is:
- next node pointer
- hash code (optional)
- node value_type

Here is the git log entry in case you approve the patch.

     libstdc++: Store allocator::pointer in hashtable implementation

     Use allocator pointer type in _Hashtable implementation.

        * include/bits/hashtable_policy.h
        (_ReuseOrAllocNode<>::__node_type): Remove.
        (_ReuseOrAllocNode<>::__node_pointer): New.
        (_ReuseOrAllocNode(__node_pointer, __hashtable_alloc&)): Adapt to
        use the latter.
        (_ReuseOrAllocNode<>::operator()(_Arg&&)): Return the latter.
        (_AllocNode<>::__node_type): Remove.
        (_AllocNode<>::__node_pointer): New.
        (_AllocNode<>::operator()<>(_Arg&&)): Return the latter.
        (_Hash_node_cust_ptr_base<>): New.
        (_Hash_node_cache_value<typename _Value, bool _Cache_hash_code>):
        New.
        (_Hash_node<>::__node_base): New.
        (_Hash_node<>::__node_ptr): New.
        (_Hash_node<>::__node_type): New.
        (_Hash_node<>::__node_value_cache_type): New.
        (_Hash_node<>::_M_next_ptr()): New.
        (_Hash_cust_ptr_node<typename _Ptr, bool _Cache_hash_code>): New.
        (_Hashtable_iterator_base<typename _NodeType>): New.
        (_Node_iterator_base<>): Inherit from the latter.
        (_Hashtable_iterator<typename _NodeType, bool
        __constant_iterators>): New.
        (_Hashtable_const_iterator<typename _NodeType, bool
        __constant_iterators>): New.
        (_Insert_base<>::__alloc_ptr): New.
        (_Insert_base<>::__node_type): New. Defined conditionally as
        _Hash_node<> or _Hash_cust_ptr_node<> depending on whether
        __alloc_ptr is a raw pointer.
        (_Insert_base<>::__node_alloc_type): New.
        (_Insert_base<>::__hashtable_alloc): Remove.
        (_Insert_base<>::iterator): Defined conditionally as
        _Node_iterator<> or _Hashtable_iterator<> depending on whether
        __alloc_ptr is a raw pointer.
        (_Insert_base<>::const_iterator): Defined conditionally as
        _Node_const_iterator<> or _Hashtable_const_iterator<> depending
        on whether __alloc_ptr is a raw pointer.
        (_Hashtable_local_iter_base<>): New.
        (_Hash_code_base<>::_M_bucket_index(const
        _Hash_node_cache_value<>&, size_t)): New.
        (_Hashtable_local_iterator<>): New.
        (_Hashtable_const_local_iterator<>): New.
        (_Hashtable_base<>::_M_equals(const _Key&, __hash_code,
        const _Hash_node_cache_value<>&)): New.
        (_Hashtable_base<>::_M_node_equals(const _Hash_node_cache_value<>&,
        const _Hash_node_cache_value<>&)): New.
        (_Hashtable_alloc<>::__value_alloc_traits): Remove.
        (_Hashtable_alloc<>::__node_base_ptr): Remove.
        * include/bits/hashtable.h (_Hashtable<>): Adapt.
        * testsuite/23_containers/unordered_map/allocator/ext_ptr.cc:
        New test.
        * testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc:
        New test.
        * testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc:
        New test.
        * testsuite/23_containers/unordered_set/allocator/ext_ptr.cc:
        Adapt.

Tested under Linux x86_64, in both normal and versioned namespace modes.

François


On 20/10/20 1:04 pm, Jonathan Wakely wrote:
> On 28/09/20 22:37 +0200, François Dumont via Libstdc++ wrote:
>> Following recent changes on _Hashtable I rebased the patch and 
>> completely reviewed it.
>>
>> I managed to integrate the allocator custom pointer type without 
>> touching _Hashtable base types like _Hash_code_base or 
>> _Hashtable_base. However I cannot see how to use the custom pointer 
>> type without impacting the node types like _Hash_node_base which now 
>> takes a template parameter, the custom pointer type.
>>
>> From an ABI point of view the node types are different, but the data 
>> structure is the same. The only difference is that the 
>> _Hash_node_base _M_nxt is now a custom pointer to _Hash_node<> rather 
>> than a plain _Hash_node_base*.
>>
>> Even if this patch can't go in because of the ABI-breaking change, I 
>> am going to adapt some of the code simplifications for master, 
>> especially the _Hash_code_base and _Local_iterator_base simplifications.
>>
>> Let me know if you can think of a way to integrate the custom pointer 
>> without impacting the ABI. Unless impacting the node types and 
>> associated iterator types is fine, even though I already noticed that 
>> the pretty-printer tests break with those changes.
>
> The approach I used for the other containers (which was never
> completed and committed) is something like:
>
> struct _Node_base
> {
>   _Node_base* _M_next;
> };
>
> template<typename _Ptr>
> struct _Fancy_node_base
> {
>   _Ptr _M_next;
> };
>
> template<typename _Ptr>
>   using node_base = conditional_t<is_pointer<_Ptr>::value,
>                                   _Node_base,
> _Fancy_node_base<_Ptr>>;
>
> This way all existing code that has allocators with non-fancy pointers
> continues to use the same type. Code using fancy pointers (which
> doesn't currently work properly anyway) changes to use the new types
> that depend on the pointer type.
>


[-- Attachment #2: hashtable.patch --]
[-- Type: text/x-patch, Size: 67403 bytes --]

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index 6c6c5edde0b..86644d447ca 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -182,8 +182,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 				 _RehashPolicy, _Traits>,
       private __detail::_Hashtable_alloc<
 	__alloc_rebind<_Alloc,
-		       __detail::_Hash_node<_Value,
-					    _Traits::__hash_cached::value>>>
+#if _GLIBCXX_INLINE_VERSION
+		       __detail::_Hash_cust_ptr_node<
+			 __detail::__alloc_val_ptr<_Alloc, _Value>,
+			 _Traits::__hash_cached::value>>>
+#else
+	  typename std::conditional<
+	    std::__is_pointer<
+	      __detail::__alloc_val_ptr<_Alloc, _Value>>::__value,
+	    __detail::_Hash_node<_Value, _Traits::__hash_cached::value>,
+	    __detail::_Hash_cust_ptr_node<
+	      __detail::__alloc_val_ptr<_Alloc, _Value>,
+	      _Traits::__hash_cached::value>>::type>>
+#endif
     {
       static_assert(is_same<typename remove_cv<_Value>::type, _Value>::value,
 	  "unordered container must have a non-const, non-volatile value_type");
@@ -195,21 +206,30 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       using __traits_type = _Traits;
       using __hash_cached = typename __traits_type::__hash_cached;
       using __constant_iterators = typename __traits_type::__constant_iterators;
-      using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>;
+      using __alloc_ptr = __detail::__alloc_val_ptr<_Alloc, _Value>;
+      using __node_type =
+#if _GLIBCXX_INLINE_VERSION
+	__detail::_Hash_cust_ptr_node<__alloc_ptr,
+				      _Traits::__hash_cached::value>;
+#else
+	typename std::conditional<
+	  std::__is_pointer<__alloc_ptr>::__value,
+	    __detail::_Hash_node<_Value, _Traits::__hash_cached::value>,
+	    __detail::_Hash_cust_ptr_node<__alloc_ptr,
+					  _Traits::__hash_cached::value>>::type;
+#endif
+      using __node_base = typename __node_type::__node_base;
+      using __node_value_type = typename __node_type::__node_value_cache_type;
       using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
-
       using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>;
 
-      using __node_value_type =
-	__detail::_Hash_node_value<_Value, __hash_cached::value>;
       using __node_ptr = typename __hashtable_alloc::__node_ptr;
-      using __value_alloc_traits =
-	typename __hashtable_alloc::__value_alloc_traits;
       using __node_alloc_traits =
 	typename __hashtable_alloc::__node_alloc_traits;
-      using __node_base = typename __hashtable_alloc::__node_base;
-      using __node_base_ptr = typename __hashtable_alloc::__node_base_ptr;
+      using __value_alloc_traits =
+	typename __node_alloc_traits::template rebind_traits<_Value>;
       using __buckets_ptr = typename __hashtable_alloc::__buckets_ptr;
+      using __buckets_ptr_traits = std::pointer_traits<__buckets_ptr>;
 
       using __insert_base = __detail::_Insert<_Key, _Value, _Alloc, _ExtractKey,
 					      _Equal, _Hash,
@@ -233,15 +253,47 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
       using const_iterator = typename __insert_base::const_iterator;
 
-      using local_iterator = __detail::_Local_iterator<key_type, _Value,
-			_ExtractKey, _Hash, _RangeHash, _Unused,
-					     __constant_iterators::value,
-					     __hash_cached::value>;
+      using local_iterator =
+#if _GLIBCXX_INLINE_VERSION
+	__detail::_Hashtable_local_iterator<key_type, __node_type,
+					    _ExtractKey, _Hash,
+					    _RangeHash, _Unused,
+					    __constant_iterators::value,
+					    __hash_cached::value>;
+#else
+	typename std::conditional<
+	  std::__is_pointer<__alloc_ptr>::__value,
+	__detail::_Local_iterator<key_type, _Value,
+				  _ExtractKey, _Hash, _RangeHash, _Unused,
+				  __constant_iterators::value,
+				  __hash_cached::value>,
+	__detail::_Hashtable_local_iterator<key_type, __node_type,
+					    _ExtractKey, _Hash,
+					    _RangeHash, _Unused,
+					    __constant_iterators::value,
+					    __hash_cached::value>>::type;
+#endif
 
-      using const_local_iterator = __detail::_Local_const_iterator<
-			key_type, _Value,
-			_ExtractKey, _Hash, _RangeHash, _Unused,
-			__constant_iterators::value, __hash_cached::value>;
+      using const_local_iterator =
+#if _GLIBCXX_INLINE_VERSION
+	__detail::_Hashtable_const_local_iterator<key_type, __node_type,
+						  _ExtractKey, _Hash,
+						  _RangeHash, _Unused,
+						  __constant_iterators::value,
+						  __hash_cached::value>;
+#else
+	typename std::conditional<
+	std::__is_pointer<__alloc_ptr>::__value,
+	__detail::_Local_const_iterator<key_type, _Value,
+				  _ExtractKey, _Hash, _RangeHash, _Unused,
+				  __constant_iterators::value,
+				  __hash_cached::value>,
+	__detail::_Hashtable_const_local_iterator<key_type, __node_type,
+						  _ExtractKey, _Hash,
+						  _RangeHash, _Unused,
+						  __constant_iterators::value,
+						  __hash_cached::value>>::type;
+#endif
 
     private:
       using __rehash_type = _RehashPolicy;
@@ -374,7 +426,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 #endif
 
     private:
-      __buckets_ptr		_M_buckets		= &_M_single_bucket;
+      __buckets_ptr		_M_buckets =
+			__buckets_ptr_traits::pointer_to(_M_single_bucket);
       size_type			_M_bucket_count		= 1;
       __node_base		_M_before_begin;
       size_type			_M_element_count	= 0;
@@ -386,13 +439,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // qualified.
       // Note that we can't leave hashtable with 0 bucket without adding
       // numerous checks in the code to avoid 0 modulus.
-      __node_base_ptr		_M_single_bucket	= nullptr;
+      __node_base*		_M_single_bucket	= nullptr;
 
       void
       _M_update_bbegin()
       {
-	if (_M_begin())
-	  _M_buckets[_M_bucket_index(*_M_begin())] = &_M_before_begin;
+	if (auto __begin = _M_begin())
+	  _M_buckets[_M_bucket_index(*__begin)] = &_M_before_begin;
       }
 
       void
@@ -402,9 +455,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_M_update_bbegin();
       }
 
+#if !_GLIBCXX_INLINE_VERSION
+      void
+      _M_update_bbegin(__detail::_Hash_node_base* __n)
+      { _M_update_bbegin(static_cast<__node_ptr>(__n));  }
+#endif
+
       bool
       _M_uses_single_bucket(__buckets_ptr __bkts) const
-      { return __builtin_expect(__bkts == &_M_single_bucket, false); }
+      {
+	return __builtin_expect(
+	  std::__to_address(__bkts) == &_M_single_bucket, false);
+      }
 
       bool
       _M_uses_single_bucket() const
@@ -419,7 +481,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	if (__builtin_expect(__bkt_count == 1, false))
 	  {
 	    _M_single_bucket = nullptr;
-	    return &_M_single_bucket;
+	    return __buckets_ptr_traits::pointer_to(_M_single_bucket);
 	  }
 
 	return __hashtable_alloc::_M_allocate_buckets(__bkt_count);
@@ -440,12 +502,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
       // Gets bucket begin, deals with the fact that non-empty buckets contain
       // their before begin node.
-      __node_ptr
+      __node_type*
       _M_bucket_begin(size_type __bkt) const;
 
-      __node_ptr
+      __node_type*
       _M_begin() const
-      { return static_cast<__node_ptr>(_M_before_begin._M_nxt); }
+      {
+	return
+	  static_cast<__node_type*>(std::__to_address(_M_before_begin._M_nxt));
+      }
 
       // Assign *this using another _Hashtable instance. Whether elements
       // are copied or moved depends on the _Ht reference.
@@ -492,6 +557,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		   const _Hash&, const _Equal&, const allocator_type&,
 		   false_type __uks);
 
+      static __node_ptr
+      _S_cast(__node_ptr __n)
+      { return __n; }
+
+#if !_GLIBCXX_INLINE_VERSION
+      static __node_ptr
+	_S_cast(__detail::_Hash_node_base* __n)
+      { return static_cast<__node_ptr>(__n); }
+#endif
+
     public:
       // Constructor, destructor, assignment, swap
       _Hashtable() = default;
@@ -568,7 +643,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _Hashtable&
       operator=(initializer_list<value_type> __l)
       {
-	__reuse_or_alloc_node_gen_t __roan(_M_begin(), *this);
+	__reuse_or_alloc_node_gen_t __roan(_M_before_begin._M_nxt, *this);
 	_M_before_begin._M_nxt = nullptr;
 	clear();
 
@@ -736,16 +811,17 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
       // Find and insert helper functions and types
       // Find the node before the one matching the criteria.
-      __node_base_ptr
+      __node_base*
       _M_find_before_node(size_type, const key_type&, __hash_code) const;
 
-      __node_ptr
+      __node_type*
       _M_find_node(size_type __bkt, const key_type& __key,
 		   __hash_code __c) const
       {
-	__node_base_ptr __before_n = _M_find_before_node(__bkt, __key, __c);
+	__node_base* __before_n = _M_find_before_node(__bkt, __key, __c);
 	if (__before_n)
-	  return static_cast<__node_ptr>(__before_n->_M_nxt);
+	  return static_cast<__node_type*>(
+				std::__to_address(__before_n->_M_nxt));
 	return nullptr;
       }
 
@@ -759,8 +835,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 			     size_type __next_bkt);
 
       // Get the node before __n in the bucket __bkt
-      __node_base_ptr
-      _M_get_previous_node(size_type __bkt, __node_ptr __n);
+      __node_base*
+      _M_get_previous_node(size_type __bkt, __node_type* __n);
 
       // Insert node __n with hash code __code, in bucket __bkt if no
       // rehash (assumes no element with same key already present).
@@ -772,7 +848,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // Insert node __n with key __k and hash code __code.
       // Takes ownership of __n if insertion succeeds, throws otherwise.
       iterator
-      _M_insert_multi_node(__node_ptr __hint,
+      _M_insert_multi_node(__node_type* __hint,
 			   __hash_code __code, __node_ptr __n);
 
       template<typename... _Args>
@@ -830,7 +906,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _M_erase(false_type __uks, const key_type&);
 
       iterator
-      _M_erase(size_type __bkt, __node_base_ptr __prev_n, __node_ptr __n);
+      _M_erase(size_type __bkt, __node_base* __prev_n, __node_ptr __n);
 
     public:
       // Emplace
@@ -890,7 +966,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	    const key_type& __k = __nh._M_key();
 	    __hash_code __code = this->_M_hash_code(__k);
 	    size_type __bkt = _M_bucket_index(__code);
-	    if (__node_ptr __n = _M_find_node(__bkt, __k, __code))
+	    if (__node_type* __n = _M_find_node(__bkt, __k, __code))
 	      {
 		__ret.node = std::move(__nh);
 		__ret.position = iterator(__n);
@@ -926,11 +1002,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
     private:
       node_type
-      _M_extract_node(size_t __bkt, __node_base_ptr __prev_n)
+      _M_extract_node(size_t __bkt, __node_base* __prev_n)
       {
-	__node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	__node_ptr __n = _S_cast(__prev_n->_M_nxt);
 	if (__prev_n == _M_buckets[__bkt])
-	  _M_remove_bucket_begin(__bkt, __n->_M_next(),
+	  _M_remove_bucket_begin(__bkt, __n->_M_next_ptr(),
 	     __n->_M_nxt ? _M_bucket_index(*__n->_M_next()) : 0);
 	else if (__n->_M_nxt)
 	  {
@@ -962,7 +1038,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	node_type __nh;
 	__hash_code __code = this->_M_hash_code(__k);
 	std::size_t __bkt = _M_bucket_index(__code);
-	if (__node_base_ptr __prev_node = _M_find_before_node(__bkt, __k, __code))
+	if (auto __prev_node = _M_find_before_node(__bkt, __k, __code))
 	  __nh = _M_extract_node(__bkt, __prev_node);
 	return __nh;
       }
@@ -1032,10 +1108,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     _M_bucket_begin(size_type __bkt) const
-    -> __node_ptr
+    -> __node_type*
     {
-      __node_base_ptr __n = _M_buckets[__bkt];
-      return __n ? static_cast<__node_ptr>(__n->_M_nxt) : nullptr;
+      __node_base* __n = _M_buckets[__bkt];
+      return __n
+	? static_cast<__node_type*>(std::__to_address(__n->_M_nxt))
+	: nullptr;
     }
 
   template<typename _Key, typename _Value, typename _Alloc,
@@ -1123,7 +1201,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	      && __this_alloc != __that_alloc)
 	    {
 	      // Replacement allocator cannot free existing storage.
-	      this->_M_deallocate_nodes(_M_begin());
+	      this->_M_deallocate_nodes(_S_cast(_M_before_begin._M_nxt));
 	      _M_before_begin._M_nxt = nullptr;
 	      _M_deallocate_buckets();
 	      _M_buckets = nullptr;
@@ -1175,15 +1253,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	    _M_bucket_count = __ht._M_bucket_count;
 	  }
 	else
-	  __builtin_memset(_M_buckets, 0,
-			   _M_bucket_count * sizeof(__node_base_ptr));
+	  __builtin_memset(std::__to_address(_M_buckets), 0,
+			   _M_bucket_count * sizeof(__node_base*));
 
 	__try
 	  {
 	    __hashtable_base::operator=(std::forward<_Ht>(__ht));
 	    _M_element_count = __ht._M_element_count;
 	    _M_rehash_policy = __ht._M_rehash_policy;
-	    __reuse_or_alloc_node_gen_t __roan(_M_begin(), *this);
+	    __reuse_or_alloc_node_gen_t __roan(_M_before_begin._M_nxt, *this);
 	    _M_before_begin._M_nxt = nullptr;
 	    _M_assign(std::forward<_Ht>(__ht), __roan);
 	    if (__former_buckets)
@@ -1199,8 +1277,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		_M_buckets = __former_buckets;
 		_M_bucket_count = __former_bucket_count;
 	      }
-	    __builtin_memset(_M_buckets, 0,
-			     _M_bucket_count * sizeof(__node_base_ptr));
+	    __builtin_memset(std::__to_address(_M_buckets), 0,
+			     _M_bucket_count * sizeof(__node_base*));
 	    __throw_exception_again;
 	  }
       }
@@ -1226,14 +1304,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
 	    // First deal with the special first node pointed to by
 	    // _M_before_begin.
-	    __node_ptr __ht_n = __ht._M_begin();
+	    __node_type* __ht_n = __ht._M_begin();
 	    __node_ptr __this_n
 	      = __node_gen(__fwd_value_for<_Ht>(__ht_n->_M_v()));
 	    this->_M_copy_code(*__this_n, *__ht_n);
 	    _M_update_bbegin(__this_n);
 
 	    // Then deal with other nodes.
-	    __node_ptr __prev_n = __this_n;
+	    __node_base* __prev_n = std::__to_address(__this_n);
 	    for (__ht_n = __ht_n->_M_next(); __ht_n; __ht_n = __ht_n->_M_next())
 	      {
 		__this_n = __node_gen(__fwd_value_for<_Ht>(__ht_n->_M_v()));
@@ -1242,7 +1320,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		size_type __bkt = _M_bucket_index(*__this_n);
 		if (!_M_buckets[__bkt])
 		  _M_buckets[__bkt] = __prev_n;
-		__prev_n = __this_n;
+		__prev_n = std::__to_address(__this_n);
 	      }
 	  }
 	__catch(...)
@@ -1266,7 +1344,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _M_rehash_policy._M_reset();
       _M_bucket_count = 1;
       _M_single_bucket = nullptr;
-      _M_buckets = &_M_single_bucket;
+      _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
       _M_before_begin._M_nxt = nullptr;
       _M_element_count = 0;
     }
@@ -1283,7 +1361,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       if (__builtin_expect(std::__addressof(__ht) == this, false))
 	return;
 
-      this->_M_deallocate_nodes(_M_begin());
+      this->_M_deallocate_nodes(_S_cast(_M_before_begin._M_nxt));
       _M_deallocate_buckets();
       __hashtable_base::operator=(std::move(__ht));
       _M_rehash_policy = __ht._M_rehash_policy;
@@ -1291,7 +1369,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_M_buckets = __ht._M_buckets;
       else
 	{
-	  _M_buckets = &_M_single_bucket;
+	  _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
 	  _M_single_bucket = __ht._M_single_bucket;
 	}
 
@@ -1368,7 +1446,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // Update buckets if __ht is using its single bucket.
       if (__ht._M_uses_single_bucket())
 	{
-	  _M_buckets = &_M_single_bucket;
+	  _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
 	  _M_single_bucket = __ht._M_single_bucket;
 	}
 
@@ -1419,7 +1497,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	{
 	  if (__ht._M_uses_single_bucket())
 	    {
-	      _M_buckets = &_M_single_bucket;
+	      _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
 	      _M_single_bucket = __ht._M_single_bucket;
 	    }
 	  else
@@ -1427,7 +1505,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
 	  // Fix bucket containing the _M_before_begin pointer that can't be
 	  // moved.
-	  _M_update_bbegin(__ht._M_begin());
+	  _M_update_bbegin(__ht._M_before_begin._M_nxt);
 
 	  __ht._M_reset();
 	}
@@ -1480,13 +1558,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  if (!__x._M_uses_single_bucket())
 	    {
 	      _M_buckets = __x._M_buckets;
-	      __x._M_buckets = &__x._M_single_bucket;
+	      __x._M_buckets =
+		__buckets_ptr_traits::pointer_to(__x._M_single_bucket);
 	    }
 	}
       else if (__x._M_uses_single_bucket())
 	{
 	  __x._M_buckets = _M_buckets;
-	  _M_buckets = &_M_single_bucket;
+	  _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
 	}	
       else
 	std::swap(_M_buckets, __x._M_buckets);
@@ -1626,13 +1705,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     _M_find_before_node(size_type __bkt, const key_type& __k,
 			__hash_code __code) const
-    -> __node_base_ptr
+    -> __node_base*
     {
-      __node_base_ptr __prev_p = _M_buckets[__bkt];
+      __node_base* __prev_p = _M_buckets[__bkt];
       if (!__prev_p)
 	return nullptr;
 
-      for (__node_ptr __p = static_cast<__node_ptr>(__prev_p->_M_nxt);;
+      for (__node_type* __p =
+	     static_cast<__node_type*>(std::__to_address(__prev_p->_M_nxt));;
 	   __p = __p->_M_next())
 	{
 	  if (this->_M_equals(__k, __code, *__p))
@@ -1673,7 +1753,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  if (__node->_M_nxt)
 	    // We must update former begin bucket that is pointing to
 	    // _M_before_begin.
-	    _M_buckets[_M_bucket_index(*__node->_M_next())] = __node;
+	    _M_buckets[_M_bucket_index(*__node->_M_next())] =
+	      std::__to_address(__node);
 
 	  _M_buckets[__bkt] = &_M_before_begin;
 	}
@@ -1710,12 +1791,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
-    _M_get_previous_node(size_type __bkt, __node_ptr __n)
-    -> __node_base_ptr
+    _M_get_previous_node(size_type __bkt, __node_type* __n)
+    -> __node_base*
     {
-      __node_base_ptr __prev_n = _M_buckets[__bkt];
-      while (__prev_n->_M_nxt != __n)
-	__prev_n = __prev_n->_M_nxt;
+      __node_base* __prev_n = _M_buckets[__bkt];
+      while (std::__to_address(__prev_n->_M_nxt) != __n)
+	__prev_n = std::__to_address(__prev_n->_M_nxt);
       return __prev_n;
     }
 
@@ -1735,7 +1816,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__code);
-	if (__node_ptr __p = _M_find_node(__bkt, __k, __code))
+	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
 	  // There is already an equivalent node, no insertion
 	  return std::make_pair(iterator(__p), false);
 
@@ -1795,7 +1876,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // Always insert at the beginning of the bucket.
       _M_insert_bucket_begin(__bkt, __node);
       ++_M_element_count;
-      return iterator(__node);
+      return iterator(std::__to_address(__node));
     }
 
   template<typename _Key, typename _Value, typename _Alloc,
@@ -1805,7 +1886,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
-    _M_insert_multi_node(__node_ptr __hint,
+    _M_insert_multi_node(__node_type* __hint,
 			 __hash_code __code, __node_ptr __node)
     -> iterator
     {
@@ -1822,7 +1903,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
       // Find the node before an equivalent one or use hint if it exists and
       // if it is equivalent.
-      __node_base_ptr __prev
+      __node_base* __prev
 	= __builtin_expect(__hint != nullptr, false)
 	  && this->_M_equals(__k, __code, *__hint)
 	    ? __hint
@@ -1841,7 +1922,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	      {
 		size_type __next_bkt = _M_bucket_index(*__node->_M_next());
 		if (__next_bkt != __bkt)
-		  _M_buckets[__next_bkt] = __node;
+		  _M_buckets[__next_bkt] = std::__to_address(__node);
 	      }
 	}
       else
@@ -1850,7 +1931,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	// equivalent elements' relative positions.
 	_M_insert_bucket_begin(__bkt, __node);
       ++_M_element_count;
-      return iterator(__node);
+      return iterator(std::__to_address(__node));
     }
 
   // Insert v if no element with its key is already present.
@@ -1870,8 +1951,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__code);
 
-	if (__node_ptr __node = _M_find_node(__bkt, __k, __code))
-	  return { iterator(__node), false };
+	if (__node_type* __n = _M_find_node(__bkt, __k, __code))
+	  return { iterator(__n), false };
 
 	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 	auto __pos
@@ -1916,14 +1997,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     erase(const_iterator __it)
     -> iterator
     {
-      __node_ptr __n = __it._M_cur;
+      __node_type* __n = __it._M_cur;
       std::size_t __bkt = _M_bucket_index(*__n);
 
       // Look for previous node to unlink it from the erased one, this
       // is why we need buckets to contain the before begin to make
       // this search fast.
-      __node_base_ptr __prev_n = _M_get_previous_node(__bkt, __n);
-      return _M_erase(__bkt, __prev_n, __n);
+      __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
+      return _M_erase(__bkt, __prev_n, _S_cast(__prev_n->_M_nxt));
     }
 
   template<typename _Key, typename _Value, typename _Alloc,
@@ -1933,11 +2014,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     auto
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
-    _M_erase(size_type __bkt, __node_base_ptr __prev_n, __node_ptr __n)
+    _M_erase(size_type __bkt, __node_base* __prev_n, __node_ptr __n)
     -> iterator
     {
       if (__prev_n == _M_buckets[__bkt])
-	_M_remove_bucket_begin(__bkt, __n->_M_next(),
+	_M_remove_bucket_begin(__bkt, __n->_M_next_ptr(),
 	  __n->_M_nxt ? _M_bucket_index(*__n->_M_next()) : 0);
       else if (__n->_M_nxt)
 	{
@@ -1968,12 +2049,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       std::size_t __bkt = _M_bucket_index(__code);
 
       // Look for the node before the first matching node.
-      __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+      __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
       if (!__prev_n)
 	return 0;
 
       // We found a matching node, erase it.
-      __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+      __node_ptr __n = _S_cast(__prev_n->_M_nxt);
       _M_erase(__bkt, __prev_n, __n);
       return 1;
     }
@@ -1992,7 +2073,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       std::size_t __bkt = _M_bucket_index(__code);
 
       // Look for the node before the first matching node.
-      __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+      __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
       if (!__prev_n)
 	return 0;
 
@@ -2002,8 +2083,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // We use one loop to find all matching nodes and another to deallocate
       // them so that the key stays valid during the first loop. It might be
       // invalidated indirectly when destroying nodes.
-      __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
-      __node_ptr __n_last = __n->_M_next();
+      __node_ptr __n = _S_cast(__prev_n->_M_nxt);
+      __node_type* __n_last = __n->_M_next();
       while (__n_last && this->_M_node_equals(*__n, *__n_last))
 	__n_last = __n_last->_M_next();
 
@@ -2013,19 +2094,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       size_type __result = 0;
       do
 	{
-	  __node_ptr __p = __n->_M_next();
+	  __node_ptr __p = __n->_M_next_ptr();
 	  this->_M_deallocate_node(__n);
 	  __n = __p;
 	  ++__result;
 	}
-      while (__n != __n_last);
+      while (std::__to_address(__n) != __n_last);
 
       _M_element_count -= __result;
       if (__prev_n == _M_buckets[__bkt])
-	_M_remove_bucket_begin(__bkt, __n_last, __n_last_bkt);
+	_M_remove_bucket_begin(__bkt, __n, __n_last_bkt);
       else if (__n_last_bkt != __bkt)
 	_M_buckets[__n_last_bkt] = __prev_n;
-      __prev_n->_M_nxt = __n_last;
+      __prev_n->_M_nxt = __n;
       return __result;
     }
 
@@ -2039,41 +2120,42 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     erase(const_iterator __first, const_iterator __last)
     -> iterator
     {
-      __node_ptr __n = __first._M_cur;
-      __node_ptr __last_n = __last._M_cur;
+      __node_type* __n = __first._M_cur;
+      __node_type* __last_n = __last._M_cur;
       if (__n == __last_n)
 	return iterator(__n);
 
       std::size_t __bkt = _M_bucket_index(*__n);
 
-      __node_base_ptr __prev_n = _M_get_previous_node(__bkt, __n);
+      __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
+      __node_ptr __nptr = _S_cast(__prev_n->_M_nxt);
       bool __is_bucket_begin = __n == _M_bucket_begin(__bkt);
       std::size_t __n_bkt = __bkt;
       for (;;)
 	{
 	  do
 	    {
-	      __node_ptr __tmp = __n;
-	      __n = __n->_M_next();
+	      __node_ptr __tmp = __nptr;
+	      __nptr = __nptr->_M_next_ptr();
 	      this->_M_deallocate_node(__tmp);
 	      --_M_element_count;
-	      if (!__n)
+	      if (!__nptr)
 		break;
-	      __n_bkt = _M_bucket_index(*__n);
+	      __n_bkt = _M_bucket_index(*__nptr);
 	    }
-	  while (__n != __last_n && __n_bkt == __bkt);
+	  while (std::__to_address(__nptr) != __last_n && __n_bkt == __bkt);
 	  if (__is_bucket_begin)
-	    _M_remove_bucket_begin(__bkt, __n, __n_bkt);
-	  if (__n == __last_n)
+	    _M_remove_bucket_begin(__bkt, __nptr, __n_bkt);
+	  if (std::__to_address(__nptr) == __last_n)
 	    break;
 	  __is_bucket_begin = true;
 	  __bkt = __n_bkt;
 	}
 
-      if (__n && (__n_bkt != __bkt || __is_bucket_begin))
+      if (__nptr && (__n_bkt != __bkt || __is_bucket_begin))
 	_M_buckets[__n_bkt] = __prev_n;
-      __prev_n->_M_nxt = __n;
-      return iterator(__n);
+      __prev_n->_M_nxt = __nptr;
+      return iterator(std::__to_address(__nptr));
     }
 
   template<typename _Key, typename _Value, typename _Alloc,
@@ -2085,9 +2167,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     clear() noexcept
     {
-      this->_M_deallocate_nodes(_M_begin());
-      __builtin_memset(_M_buckets, 0,
-		       _M_bucket_count * sizeof(__node_base_ptr));
+      this->_M_deallocate_nodes(_S_cast(_M_before_begin._M_nxt));
+      __builtin_memset(std::__to_address(_M_buckets), 0,
+		       _M_bucket_count * sizeof(__node_base*));
       _M_element_count = 0;
       _M_before_begin._M_nxt = nullptr;
     }
@@ -2148,12 +2230,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     _M_rehash_aux(size_type __bkt_count, true_type /* __uks */)
     {
       __buckets_ptr __new_buckets = _M_allocate_buckets(__bkt_count);
-      __node_ptr __p = _M_begin();
+      __node_type* __p = _M_begin();
       _M_before_begin._M_nxt = nullptr;
       std::size_t __bbegin_bkt = 0;
       while (__p)
 	{
-	  __node_ptr __next = __p->_M_next();
+	  __node_type* __next = __p->_M_next();
 	  std::size_t __bkt
 	    = __hash_code_base::_M_bucket_index(*__p, __bkt_count);
 	  if (!__new_buckets[__bkt])
@@ -2191,16 +2273,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     _M_rehash_aux(size_type __bkt_count, false_type /* __uks */)
     {
       __buckets_ptr __new_buckets = _M_allocate_buckets(__bkt_count);
-      __node_ptr __p = _M_begin();
+      __node_type* __p = _M_begin();
       _M_before_begin._M_nxt = nullptr;
       std::size_t __bbegin_bkt = 0;
       std::size_t __prev_bkt = 0;
-      __node_ptr __prev_p = nullptr;
+      __node_type* __prev_p = nullptr;
       bool __check_bucket = false;
 
       while (__p)
 	{
-	  __node_ptr __next = __p->_M_next();
+	  __node_type* __next = __p->_M_next();
 	  std::size_t __bkt
 	    = __hash_code_base::_M_bucket_index(*__p, __bkt_count);
 
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 28372979c87..131a16b1969 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -31,9 +31,9 @@
 #ifndef _HASHTABLE_POLICY_H
 #define _HASHTABLE_POLICY_H 1
 
-#include <tuple>		// for std::tuple, std::forward_as_tuple
-#include <bits/stl_algobase.h>	// for std::min, std::is_permutation.
-#include <ext/numeric_traits.h>	// for __gnu_cxx::__int_traits
+#include <tuple>		     // for std::tuple, std::forward_as_tuple
+#include <bits/stl_algobase.h>	     // for std::min, std::is_permutation.
+#include <ext/numeric_traits.h>	     // for __gnu_cxx::__int_traits
 
 namespace std _GLIBCXX_VISIBILITY(default)
 {
@@ -59,24 +59,29 @@ namespace __detail
 
   // Helper function: return distance(first, last) for forward
   // iterators, or 0/1 for input iterators.
-  template<class _Iterator>
+  template<typename _Iterator>
     inline typename std::iterator_traits<_Iterator>::difference_type
     __distance_fw(_Iterator __first, _Iterator __last,
 		  std::input_iterator_tag)
     { return __first != __last ? 1 : 0; }
 
-  template<class _Iterator>
+  template<typename _Iterator>
     inline typename std::iterator_traits<_Iterator>::difference_type
     __distance_fw(_Iterator __first, _Iterator __last,
 		  std::forward_iterator_tag)
     { return std::distance(__first, __last); }
 
-  template<class _Iterator>
+  template<typename _Iterator>
     inline typename std::iterator_traits<_Iterator>::difference_type
     __distance_fw(_Iterator __first, _Iterator __last)
     { return __distance_fw(__first, __last,
 			   std::__iterator_category(__first)); }
 
+  template<typename _Alloc, typename _Value>
+    using __alloc_val_ptr =
+      typename std::allocator_traits<__alloc_rebind<_Alloc,
+						    _Value>>::pointer;
+
   struct _Identity
   {
     template<typename _Tp>
@@ -94,6 +99,10 @@ namespace __detail
       { return std::get<0>(std::forward<_Tp>(__x)); }
   };
 
+#if !_GLIBCXX_INLINE_VERSION
+  struct _Hash_node_base;
+#endif
+
   template<typename _NodeAlloc>
     struct _Hashtable_alloc;
 
@@ -107,24 +116,28 @@ namespace __detail
       using __hashtable_alloc = _Hashtable_alloc<__node_alloc_type>;
       using __node_alloc_traits =
 	typename __hashtable_alloc::__node_alloc_traits;
-      using __node_type = typename __hashtable_alloc::__node_type;
+      using __node_ptr = typename __hashtable_alloc::__node_ptr;
 
     public:
-      _ReuseOrAllocNode(__node_type* __nodes, __hashtable_alloc& __h)
+      _ReuseOrAllocNode(__node_ptr __nodes, __hashtable_alloc& __h)
       : _M_nodes(__nodes), _M_h(__h) { }
+#if !_GLIBCXX_INLINE_VERSION
+      _ReuseOrAllocNode(_Hash_node_base* __nodes, __hashtable_alloc& __h)
+      : _ReuseOrAllocNode(static_cast<__node_ptr>(__nodes), __h) { }
+#endif
       _ReuseOrAllocNode(const _ReuseOrAllocNode&) = delete;
 
       ~_ReuseOrAllocNode()
       { _M_h._M_deallocate_nodes(_M_nodes); }
 
       template<typename _Arg>
-	__node_type*
+	__node_ptr
 	operator()(_Arg&& __arg) const
 	{
 	  if (_M_nodes)
 	    {
-	      __node_type* __node = _M_nodes;
-	      _M_nodes = _M_nodes->_M_next();
+	      __node_ptr __node = _M_nodes;
+	      _M_nodes = _M_nodes->_M_next_ptr();
 	      __node->_M_nxt = nullptr;
 	      auto& __a = _M_h._M_node_allocator();
 	      __node_alloc_traits::destroy(__a, __node->_M_valptr());
@@ -144,7 +157,7 @@ namespace __detail
 	}
 
     private:
-      mutable __node_type* _M_nodes;
+      mutable __node_ptr _M_nodes;
       __hashtable_alloc& _M_h;
     };
 
@@ -155,14 +168,14 @@ namespace __detail
     {
     private:
       using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
-      using __node_type = typename __hashtable_alloc::__node_type;
+      using __node_ptr = typename __hashtable_alloc::__node_ptr;
 
     public:
       _AllocNode(__hashtable_alloc& __h)
       : _M_h(__h) { }
 
       template<typename _Arg>
-	__node_type*
+	__node_ptr
 	operator()(_Arg&& __arg) const
 	{ return _M_h._M_allocate_node(std::forward<_Arg>(__arg)); }
 
@@ -203,6 +216,7 @@ namespace __detail
       using __unique_keys = __bool_constant<_Unique_keys>;
     };
 
+#if !_GLIBCXX_INLINE_VERSION
   /**
    *  struct _Hash_node_base
    *
@@ -219,6 +233,26 @@ namespace __detail
 
     _Hash_node_base(_Hash_node_base* __next) noexcept : _M_nxt(__next) { }
   };
+#endif
+
+  /**
+   * struct _Hash_node_cust_ptr_base
+   *
+   * Like _Hash_node_base but used when the allocator defines a custom
+   * pointer type.
+   */
+  template<typename _NodePtr>
+    struct _Hash_node_cust_ptr_base
+    {
+      using __node_ptr = _NodePtr;
+
+      __node_ptr _M_nxt;
+
+      _Hash_node_cust_ptr_base() noexcept : _M_nxt() { }
+
+      _Hash_node_cust_ptr_base(__node_ptr __next) noexcept : _M_nxt(__next) { }
+    };
+
 
   /**
    *  struct _Hash_node_value_base
@@ -263,12 +297,21 @@ namespace __detail
     struct _Hash_node_code_cache<true>
     { std::size_t  _M_hash_code; };
 
+#if !_GLIBCXX_INLINE_VERSION
   template<typename _Value, bool _Cache_hash_code>
     struct _Hash_node_value
     : _Hash_node_value_base<_Value>
     , _Hash_node_code_cache<_Cache_hash_code>
     { };
+#endif
 
+  template<typename _Value, bool _Cache_hash_code>
+    struct _Hash_node_cache_value
+    : _Hash_node_code_cache<_Cache_hash_code>
+    , _Hash_node_value_base<_Value>
+    { };
+
+#if !_GLIBCXX_INLINE_VERSION
   /**
    *  Primary template struct _Hash_node.
    */
@@ -277,21 +320,63 @@ namespace __detail
     : _Hash_node_base
     , _Hash_node_value<_Value, _Cache_hash_code>
     {
+      using __node_base = _Hash_node_base;
+      using __node_ptr = _Hash_node*;
+      using __node_type = _Hash_node;
+      using __node_value_cache_type =
+	_Hash_node_value<_Value, _Cache_hash_code>;
+
       _Hash_node*
       _M_next() const noexcept
       { return static_cast<_Hash_node*>(this->_M_nxt); }
+
+      __node_ptr
+      _M_next_ptr() const noexcept
+      { return _M_next(); }
+    };
+#endif
+
+  /**
+   *  Primary template struct _Hash_cust_ptr_node.
+   */
+  template<typename _Ptr, bool _Cache_hash_code>
+    struct _Hash_cust_ptr_node
+    : _Hash_node_cust_ptr_base<__ptr_rebind<_Ptr,
+				   _Hash_cust_ptr_node<_Ptr, _Cache_hash_code>>>
+    , _Hash_node_cache_value<typename std::pointer_traits<_Ptr>::element_type,
+			     _Cache_hash_code>
+    {
+      using __node_base =
+	_Hash_node_cust_ptr_base<__ptr_rebind<_Ptr,
+				_Hash_cust_ptr_node<_Ptr, _Cache_hash_code>>>;
+      using __node_ptr = typename __node_base::__node_ptr;
+      using __node_type =
+	typename std::pointer_traits<__node_ptr>::element_type;
+      using value_type = typename __node_type::value_type;
+      using __node_value_cache_type =
+	_Hash_node_cache_value<value_type, _Cache_hash_code>;
+      typedef typename std::pointer_traits<__node_ptr>::difference_type
+							difference_type;
+
+      __node_type*
+      _M_next() const noexcept
+      { return std::__to_address(this->_M_nxt); }
+
+      __node_ptr
+      _M_next_ptr() const noexcept
+      { return this->_M_nxt; }
     };
 
   /// Base class for node iterators.
-  template<typename _Value, bool _Cache_hash_code>
-    struct _Node_iterator_base
+  template<typename _NodeType>
+    struct _Hashtable_iterator_base
     {
-      using __node_type = _Hash_node<_Value, _Cache_hash_code>;
+      using __node_type = _NodeType;
 
       __node_type* _M_cur;
 
-      _Node_iterator_base() = default;
-      _Node_iterator_base(__node_type* __p) noexcept
+      _Hashtable_iterator_base() = default;
+      _Hashtable_iterator_base(__node_type* __p) noexcept
       : _M_cur(__p) { }
 
       void
@@ -299,18 +384,33 @@ namespace __detail
       { _M_cur = _M_cur->_M_next(); }
 
       friend bool
-      operator==(const _Node_iterator_base& __x, const _Node_iterator_base& __y)
+      operator==(const _Hashtable_iterator_base& __x, const _Hashtable_iterator_base& __y)
       noexcept
       { return __x._M_cur == __y._M_cur; }
 
 #if __cpp_impl_three_way_comparison < 201907L
       friend bool
-      operator!=(const _Node_iterator_base& __x, const _Node_iterator_base& __y)
+      operator!=(const _Hashtable_iterator_base& __x, const _Hashtable_iterator_base& __y)
       noexcept
       { return __x._M_cur != __y._M_cur; }
 #endif
     };
 
+#if !_GLIBCXX_INLINE_VERSION
+  /// Base class for node iterators.
+  template<typename _Value, bool _Cache_hash_code>
+    struct _Node_iterator_base
+    : _Hashtable_iterator_base<_Hash_node<_Value, _Cache_hash_code>>
+    {
+      using __base_type =
+	_Hashtable_iterator_base<_Hash_node<_Value, _Cache_hash_code>>;
+      using __node_type = typename __base_type::__node_type;
+
+      _Node_iterator_base() = default;
+      _Node_iterator_base(__node_type* __p) noexcept
+      : __base_type(__p) { }
+    };
+
   /// Node iterators, used to iterate through all the hashtable.
   template<typename _Value, bool __constant_iterators, bool __cache>
     struct _Node_iterator
@@ -413,6 +513,111 @@ namespace __detail
 	return __tmp;
       }
     };
+#endif
+
+  /// Node iterators, used to iterate through all the hashtable.
+  template<typename _NodeType, bool __constant_iterators>
+    struct _Hashtable_iterator
+    : public _Hashtable_iterator_base<_NodeType>
+    {
+    private:
+      using __base_type = _Hashtable_iterator_base<_NodeType>;
+      using __node_type = typename __base_type::__node_type;
+
+    public:
+      typedef typename __node_type::value_type		value_type;
+      typedef typename __node_type::difference_type	difference_type;
+      typedef std::forward_iterator_tag			iterator_category;
+
+      using pointer = typename std::conditional<__constant_iterators,
+				  const value_type*, value_type*>::type;
+
+      using reference = typename std::conditional<__constant_iterators,
+				  const value_type&, value_type&>::type;
+
+      _Hashtable_iterator() noexcept
+      : __base_type(nullptr) { }
+
+      explicit
+      _Hashtable_iterator(__node_type* __p) noexcept
+      : __base_type(__p) { }
+
+      reference
+      operator*() const noexcept
+      { return this->_M_cur->_M_v(); }
+
+      pointer
+      operator->() const noexcept
+      { return this->_M_cur->_M_valptr(); }
+
+      _Hashtable_iterator&
+      operator++() noexcept
+      {
+	this->_M_incr();
+	return *this;
+      }
+
+      _Hashtable_iterator
+      operator++(int) noexcept
+      {
+	_Hashtable_iterator __tmp(*this);
+	this->_M_incr();
+	return __tmp;
+      }
+    };
+
+  /// Node const_iterators, used to iterate through all the hashtable.
+  template<typename _NodeType, bool __constant_iterators>
+    struct _Hashtable_const_iterator
+    : public _Hashtable_iterator_base<_NodeType>
+    {
+    private:
+      using __base_type = _Hashtable_iterator_base<_NodeType>;
+      using __node_type = typename __base_type::__node_type;
+
+    public:
+      typedef typename __node_type::value_type		value_type;
+      typedef typename __node_type::difference_type	difference_type;
+      typedef std::forward_iterator_tag			iterator_category;
+
+      typedef const value_type*				pointer;
+      typedef const value_type&				reference;
+
+      _Hashtable_const_iterator() noexcept
+      : __base_type(nullptr) { }
+
+      explicit
+      _Hashtable_const_iterator(__node_type* __p) noexcept
+      : __base_type(__p) { }
+
+      _Hashtable_const_iterator(
+	const _Hashtable_iterator<_NodeType,
+				  __constant_iterators>& __x) noexcept
+      : __base_type(__x._M_cur) { }
+
+      reference
+      operator*() const noexcept
+      { return this->_M_cur->_M_v(); }
+
+      pointer
+      operator->() const noexcept
+      { return this->_M_cur->_M_valptr(); }
+
+      _Hashtable_const_iterator&
+      operator++() noexcept
+      {
+	this->_M_incr();
+	return *this;
+      }
+
+      _Hashtable_const_iterator
+      operator++(int) noexcept
+      {
+	_Hashtable_const_iterator __tmp(*this);
+	this->_M_incr();
+	return __tmp;
+      }
+    };
 
   // Many of class template _Hashtable's template parameters are policy
   // classes.  These are defaults for the policies.
@@ -800,16 +1005,22 @@ namespace __detail
 
       using __hash_cached = typename _Traits::__hash_cached;
       using __constant_iterators = typename _Traits::__constant_iterators;
-
-      using __hashtable_alloc = _Hashtable_alloc<
-	__alloc_rebind<_Alloc, _Hash_node<_Value,
-					  __hash_cached::value>>>;
+      using __alloc_ptr = __alloc_val_ptr<_Alloc, _Value>;
+
+      using __node_type =
+#if _GLIBCXX_INLINE_VERSION
+	_Hash_cust_ptr_node<__alloc_ptr, __hash_cached::value>;
+#else
+	typename std::conditional<std::__is_pointer<__alloc_ptr>::__value,
+	    _Hash_node<_Value, __hash_cached::value>,
+	    _Hash_cust_ptr_node<__alloc_ptr, __hash_cached::value>>::type;
+#endif
+      using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
 
       using value_type = typename __hashtable_base::value_type;
       using size_type = typename __hashtable_base::size_type;
 
       using __unique_keys = typename _Traits::__unique_keys;
-      using __node_alloc_type = typename __hashtable_alloc::__node_alloc_type;
       using __node_gen_type = _AllocNode<__node_alloc_type>;
 
       __hashtable&
@@ -827,11 +1038,27 @@ namespace __detail
 			const _NodeGetter&, false_type __uks);
 
     public:
-      using iterator = _Node_iterator<_Value, __constant_iterators::value,
-				      __hash_cached::value>;
+      using iterator =
+#if _GLIBCXX_INLINE_VERSION
+	_Hashtable_iterator<__node_type, __constant_iterators::value>;
+#else
+	typename std::conditional<std::__is_pointer<__alloc_ptr>::__value,
+	  _Node_iterator<_Value, __constant_iterators::value,
+			 __hash_cached::value>,
+	  _Hashtable_iterator<__node_type,
+			      __constant_iterators::value>>::type;
+#endif
 
-      using const_iterator = _Node_const_iterator<_Value, __constant_iterators::value,
-						  __hash_cached::value>;
+      using const_iterator =
+#if _GLIBCXX_INLINE_VERSION
+	_Hashtable_const_iterator<__node_type, __constant_iterators::value>;
+#else
+	typename std::conditional<std::__is_pointer<__alloc_ptr>::__value,
+	  _Node_const_iterator<_Value, __constant_iterators::value,
+			       __hash_cached::value>,
+	  _Hashtable_const_iterator<__node_type,
+				    __constant_iterators::value>>::type;
+#endif
 
       using __ireturn_type = typename std::conditional<__unique_keys::value,
 						     std::pair<iterator, bool>,
@@ -1154,6 +1381,7 @@ namespace __detail
       _Tp _M_tp;
     };
 
+#if !_GLIBCXX_INLINE_VERSION
   /**
    *  Primary class template _Local_iterator_base.
    *
@@ -1164,6 +1392,18 @@ namespace __detail
 	   typename _Hash, typename _RangeHash, typename _Unused,
 	   bool __cache_hash_code>
     struct _Local_iterator_base;
+#endif
+
+  /**
+   *  Primary class template _Hashtable_local_iter_base.
+   *
+   *  Base class for local iterators, used to iterate within a bucket
+   *  but not between buckets.
+   */
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   bool __cache_hash_code>
+    struct _Hashtable_local_iter_base;
 
   /**
    *  Primary class template _Hash_code_base.
@@ -1192,9 +1432,11 @@ namespace __detail
     private:
       using __ebo_hash = _Hashtable_ebo_helper<1, _Hash>;
 
+#if !_GLIBCXX_INLINE_VERSION
       // Gives the local iterator implementation access to _M_bucket_index().
       friend struct _Local_iterator_base<_Key, _Value, _ExtractKey,
 					 _Hash, _RangeHash, _Unused, false>;
+#endif
 
     public:
       typedef _Hash					hasher;
@@ -1223,6 +1465,7 @@ namespace __detail
       _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
       { return _RangeHash{}(__c, __bkt_count); }
 
+#if !_GLIBCXX_INLINE_VERSION
       std::size_t
       _M_bucket_index(const _Hash_node_value<_Value, false>& __n,
 		      std::size_t __bkt_count) const
@@ -1233,13 +1476,34 @@ namespace __detail
 	return _RangeHash{}(_M_hash_code(_ExtractKey{}(__n._M_v())),
 			    __bkt_count);
       }
+#endif
+
+      std::size_t
+      _M_bucket_index(const _Hash_node_cache_value<_Value, false>& __n,
+		      std::size_t __bkt_count) const
+	noexcept( noexcept(declval<const _Hash&>()(declval<const _Key&>()))
+		  && noexcept(declval<const _RangeHash&>()((__hash_code)0,
+							   (std::size_t)0)) )
+      {
+	return _RangeHash{}(_M_hash_code(_ExtractKey{}(__n._M_v())),
+			    __bkt_count);
+      }
 
+#if !_GLIBCXX_INLINE_VERSION
       std::size_t
       _M_bucket_index(const _Hash_node_value<_Value, true>& __n,
 		      std::size_t __bkt_count) const
 	noexcept( noexcept(declval<const _RangeHash&>()((__hash_code)0,
 							(std::size_t)0)) )
       { return _RangeHash{}(__n._M_hash_code, __bkt_count); }
+#endif
+
+      std::size_t
+      _M_bucket_index(const _Hash_node_cache_value<_Value, true>& __n,
+		      std::size_t __bkt_count) const
+	noexcept( noexcept(declval<const _RangeHash&>()((__hash_code)0,
+							(std::size_t)0)) )
+      { return _RangeHash{}(__n._M_hash_code, __bkt_count); }
 
       void
       _M_store_code(_Hash_node_code_cache<false>&, __hash_code) const
@@ -1267,6 +1531,7 @@ namespace __detail
       _M_hash() const { return __ebo_hash::_M_cget(); }
     };
 
+#if !_GLIBCXX_INLINE_VERSION
   /// Partial specialization used when nodes contain a cached hash code.
   template<typename _Key, typename _Value, typename _ExtractKey,
 	   typename _Hash, typename _RangeHash, typename _Unused>
@@ -1306,6 +1571,48 @@ namespace __detail
       std::size_t
       _M_get_bucket() const { return _M_bucket; }  // for debug mode
     };
+#endif
+
+  /// Partial specialization used when nodes contain a cached hash code.
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused>
+    struct _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+				      _Hash, _RangeHash, _Unused, true>
+    : public _Hashtable_iterator_base<_NodeType>
+    {
+    protected:
+      using __base_node_iter = _Hashtable_iterator_base<_NodeType>;
+      using value_type = typename _NodeType::value_type;
+      using __hash_code_base = _Hash_code_base<_Key, value_type, _ExtractKey,
+					      _Hash, _RangeHash, _Unused, true>;
+
+      _Hashtable_local_iter_base() = default;
+      _Hashtable_local_iter_base(const __hash_code_base&,
+				 _NodeType* __p,
+				 std::size_t __bkt, std::size_t __bkt_count)
+      : __base_node_iter(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count)
+      { }
+
+      void
+      _M_incr()
+      {
+	__base_node_iter::_M_incr();
+	if (this->_M_cur)
+	  {
+	    std::size_t __bkt
+	      = _RangeHash{}(this->_M_cur->_M_hash_code, _M_bucket_count);
+	    if (__bkt != _M_bucket)
+	      this->_M_cur = nullptr;
+	  }
+      }
+
+      std::size_t _M_bucket;
+      std::size_t _M_bucket_count;
+
+    public:
+      std::size_t
+      _M_get_bucket() const { return _M_bucket; }  // for debug mode
+    };
 
   // Uninitialized storage for a _Hash_code_base.
   // This type is DefaultConstructible and Assignable even if the
@@ -1338,6 +1645,7 @@ namespace __detail
       _M_h() const { return reinterpret_cast<const _Tp*>(this); }
     };
 
+#if !_GLIBCXX_INLINE_VERSION
   template<typename _Key, typename _Value, typename _ExtractKey,
 	   typename _Hash, typename _RangeHash, typename _Unused>
     using __hash_code_for_local_iter
@@ -1420,7 +1728,87 @@ namespace __detail
       std::size_t
       _M_get_bucket() const { return _M_bucket; }  // for debug mode
     };
+#endif
+
+  // Partial specialization used when hash codes are not cached
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused>
+    struct _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+				      _Hash, _RangeHash, _Unused, false>
+    : _Hash_code_storage<_Hash>
+    , _Hashtable_iterator_base<_NodeType>
+    {
+    protected:
+      using value_type = typename _NodeType::value_type;
+      using __hash_code_base = _Hash_code_base<_Key, value_type, _ExtractKey,
+					     _Hash, _RangeHash, _Unused, false>;
+      using __node_iter_base = _Hashtable_iterator_base<_NodeType>;
+
+      _Hashtable_local_iter_base() : _M_bucket_count(-1) { }
+
+      _Hashtable_local_iter_base(const __hash_code_base& __base,
+				 _NodeType* __p,
+				 std::size_t __bkt, std::size_t __bkt_count)
+      : __node_iter_base(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count)
+      { _M_init(__base.hash_function()); }
+
+      ~_Hashtable_local_iter_base()
+      {
+	if (_M_bucket_count != -1)
+	  _M_destroy();
+      }
+
+      _Hashtable_local_iter_base(const _Hashtable_local_iter_base& __iter)
+      : __node_iter_base(__iter._M_cur), _M_bucket(__iter._M_bucket)
+      , _M_bucket_count(__iter._M_bucket_count)
+      {
+	if (_M_bucket_count != -1)
+	  _M_init(*__iter._M_h());
+      }
+
+      _Hashtable_local_iter_base&
+      operator=(const _Hashtable_local_iter_base& __iter)
+      {
+	if (_M_bucket_count != -1)
+	  _M_destroy();
+	this->_M_cur = __iter._M_cur;
+	_M_bucket = __iter._M_bucket;
+	_M_bucket_count = __iter._M_bucket_count;
+	if (_M_bucket_count != -1)
+	  _M_init(*__iter._M_h());
+	return *this;
+      }
+
+      void
+      _M_incr()
+      {
+	__node_iter_base::_M_incr();
+	if (this->_M_cur)
+	  {
+	    std::size_t __bkt =
+	      _RangeHash{}((*this->_M_h())(_ExtractKey{}(this->_M_cur->_M_v())),
+			   _M_bucket_count);
+	    if (__bkt != _M_bucket)
+	      this->_M_cur = nullptr;
+	  }
+      }
+
+      std::size_t _M_bucket;
+      std::size_t _M_bucket_count;
+
+      void
+      _M_init(const _Hash& __h)
+      { ::new(this->_M_h()) _Hash(__h); }
+
+      void
+      _M_destroy() { this->_M_h()->~_Hash(); }
+
+    public:
+      std::size_t
+      _M_get_bucket() const { return _M_bucket; }  // for debug mode
+    };
 
+#if !_GLIBCXX_INLINE_VERSION
   /// local iterators
   template<typename _Key, typename _Value, typename _ExtractKey,
 	   typename _Hash, typename _RangeHash, typename _Unused,
@@ -1535,6 +1923,127 @@ namespace __detail
 	return __tmp;
       }
     };
+#endif
+
+  /// local iterators
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   bool __constant_iterators, bool __cache>
+    struct _Hashtable_local_iterator
+    : public _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+					_Hash, _RangeHash, _Unused, __cache>
+    {
+    private:
+      using __base_type =
+	_Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+				   _Hash, _RangeHash, _Unused, __cache>;
+      using __hash_code_base = typename __base_type::__hash_code_base;
+      using __node_type = typename __base_type::__node_type;
+
+    public:
+      typedef typename __base_type::value_type		value_type;
+      typedef typename std::conditional<__constant_iterators,
+					const value_type*, value_type*>::type
+							pointer;
+      typedef typename std::conditional<__constant_iterators,
+					const value_type&, value_type&>::type
+							reference;
+      typedef typename __node_type::difference_type	difference_type;
+      typedef std::forward_iterator_tag			iterator_category;
+
+      _Hashtable_local_iterator() = default;
+
+      _Hashtable_local_iterator(const __hash_code_base& __base,
+				__node_type* __n,
+				std::size_t __bkt, std::size_t __bkt_count)
+      : __base_type(__base, __n, __bkt, __bkt_count)
+      { }
+
+      reference
+      operator*() const
+      { return this->_M_cur->_M_v(); }
+
+      pointer
+      operator->() const
+      { return this->_M_cur->_M_valptr(); }
+
+      _Hashtable_local_iterator&
+      operator++()
+      {
+	this->_M_incr();
+	return *this;
+      }
+
+      _Hashtable_local_iterator
+      operator++(int)
+      {
+	_Hashtable_local_iterator __tmp(*this);
+	this->_M_incr();
+	return __tmp;
+      }
+    };
+
+  /// local const_iterators
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   bool __constant_iterators, bool __cache>
+    struct _Hashtable_const_local_iterator
+    : public _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+					_Hash, _RangeHash, _Unused, __cache>
+    {
+    private:
+      using __base_type =
+	_Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+				   _Hash, _RangeHash, _Unused, __cache>;
+      using __hash_code_base = typename __base_type::__hash_code_base;
+      using __node_type = typename __base_type::__node_type;
+
+    public:
+      typedef typename __base_type::value_type		value_type;
+      typedef const value_type*				pointer;
+      typedef const value_type&				reference;
+      typedef typename __node_type::difference_type	difference_type;
+      typedef std::forward_iterator_tag			iterator_category;
+
+      _Hashtable_const_local_iterator() = default;
+
+      _Hashtable_const_local_iterator(const __hash_code_base& __base,
+				      __node_type* __n,
+				    std::size_t __bkt, std::size_t __bkt_count)
+      : __base_type(__base, __n, __bkt, __bkt_count)
+      { }
+
+      _Hashtable_const_local_iterator(const _Hashtable_local_iterator<
+				      _Key, _NodeType, _ExtractKey,
+				      _Hash, _RangeHash, _Unused,
+				      __constant_iterators,
+				      __cache>& __x)
+      : __base_type(__x)
+      { }
+
+      reference
+      operator*() const
+      { return this->_M_cur->_M_v(); }
+
+      pointer
+      operator->() const
+      { return this->_M_cur->_M_valptr(); }
+
+      _Hashtable_const_local_iterator&
+      operator++()
+      {
+	this->_M_incr();
+	return *this;
+      }
+
+      _Hashtable_const_local_iterator
+      operator++(int)
+      {
+	_Hashtable_const_local_iterator __tmp(*this);
+	this->_M_incr();
+	return __tmp;
+      }
+    };
 
   /**
    *  Primary class template _Hashtable_base.
@@ -1597,6 +2106,7 @@ namespace __detail
       : __hash_code_base(__hash), _EqualEBO(__eq)
       { }
 
+#if !_GLIBCXX_INLINE_VERSION
       bool
       _M_equals(const _Key& __k, __hash_code __c,
 		const _Hash_node_value<_Value, __hash_cached::value>& __n) const
@@ -1606,7 +2116,19 @@ namespace __detail
 	  "key type");
 	return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
       }
+#endif
+
+      bool
+      _M_equals(const _Key& __k, __hash_code __c,
+		const _Hash_node_cache_value<_Value, __hash_cached::value>& __n) const
+      {
+	static_assert(__is_invocable<const _Equal&, const _Key&, const _Key&>{},
+	  "key equality predicate must be invocable with two arguments of "
+	  "key type");
+	return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
+      }
 
+#if !_GLIBCXX_INLINE_VERSION
       bool
       _M_node_equals(
 	const _Hash_node_value<_Value, __hash_cached::value>& __lhn,
@@ -1615,6 +2137,16 @@ namespace __detail
 	return _S_node_equals(__lhn, __rhn)
 	  && _M_eq()(_ExtractKey{}(__lhn._M_v()), _ExtractKey{}(__rhn._M_v()));
       }
+#endif
+
+      bool
+      _M_node_equals(
+	const _Hash_node_cache_value<_Value, __hash_cached::value>& __lhn,
+	const _Hash_node_cache_value<_Value, __hash_cached::value>& __rhn) const
+      {
+	return _S_node_equals(__lhn, __rhn)
+	  && _M_eq()(_ExtractKey{}(__lhn._M_v()), _ExtractKey{}(__rhn._M_v()));
+      }
 
       void
       _M_swap(_Hashtable_base& __x)
@@ -1679,8 +2211,9 @@ namespace __detail
 	  if (!__prev_n)
 	    return false;
 
-	  for (__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);;
-	       __n = __n->_M_next())
+	  __node_type* __n =
+	    static_cast<__node_type*>(std::__to_address(__prev_n->_M_nxt));
+	  for (;; __n = __n->_M_next())
 	    {
 	      if (__n->_M_v() == *__itx)
 		break;
@@ -1739,7 +2272,8 @@ namespace __detail
 	  if (!__y_prev_n)
 	    return false;
 
-	  __node_type* __y_n = static_cast<__node_type*>(__y_prev_n->_M_nxt);
+	  __node_type* __y_n =
+	    static_cast<__node_type*>(std::__to_address(__y_prev_n->_M_nxt));
 	  for (;;)
 	    {
 	      if (__this->key_eq()(_ExtractKey{}(__y_n->_M_v()),
@@ -1786,16 +2320,12 @@ namespace __detail
       // Use __gnu_cxx to benefit from _S_always_equal and al.
       using __node_alloc_traits = __gnu_cxx::__alloc_traits<__node_alloc_type>;
 
-      using __value_alloc_traits = typename __node_alloc_traits::template
-	rebind_traits<typename __node_type::value_type>;
-
-      using __node_ptr = __node_type*;
-      using __node_base = _Hash_node_base;
-      using __node_base_ptr = __node_base*;
+      using __node_ptr = typename __node_alloc_traits::pointer;
+      using __node_base = typename __node_type::__node_base;
       using __buckets_alloc_type =
-	__alloc_rebind<__node_alloc_type, __node_base_ptr>;
+	__alloc_rebind<__node_alloc_type, __node_base*>;
       using __buckets_alloc_traits = std::allocator_traits<__buckets_alloc_type>;
-      using __buckets_ptr = __node_base_ptr*;
+      using __buckets_ptr = typename __buckets_alloc_traits::pointer;
 
       _Hashtable_alloc() = default;
       _Hashtable_alloc(const _Hashtable_alloc&) = default;
@@ -1848,14 +2378,13 @@ namespace __detail
       -> __node_ptr
       {
 	auto __nptr = __node_alloc_traits::allocate(_M_node_allocator(), 1);
-	__node_ptr __n = std::__to_address(__nptr);
 	__try
 	  {
-	    ::new ((void*)__n) __node_type;
+	    ::new ((void*)std::__to_address(__nptr)) __node_type;
 	    __node_alloc_traits::construct(_M_node_allocator(),
-					   __n->_M_valptr(),
+					   __nptr->_M_valptr(),
 					   std::forward<_Args>(__args)...);
-	    return __n;
+	    return __nptr;
 	  }
 	__catch(...)
 	  {
@@ -1866,20 +2395,18 @@ namespace __detail
 
   template<typename _NodeAlloc>
     void
-    _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node(__node_ptr __n)
+    _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node(__node_ptr __nptr)
     {
-      __node_alloc_traits::destroy(_M_node_allocator(), __n->_M_valptr());
-      _M_deallocate_node_ptr(__n);
+      __node_alloc_traits::destroy(_M_node_allocator(), __nptr->_M_valptr());
+      _M_deallocate_node_ptr(__nptr);
     }
 
   template<typename _NodeAlloc>
     void
-    _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node_ptr(__node_ptr __n)
+    _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node_ptr(__node_ptr __nptr)
     {
-      typedef typename __node_alloc_traits::pointer _Ptr;
-      auto __ptr = std::pointer_traits<_Ptr>::pointer_to(*__n);
-      __n->~__node_type();
-      __node_alloc_traits::deallocate(_M_node_allocator(), __ptr, 1);
+      __nptr->~__node_type();
+      __node_alloc_traits::deallocate(_M_node_allocator(), __nptr, 1);
     }
 
   template<typename _NodeAlloc>
@@ -1889,7 +2416,7 @@ namespace __detail
       while (__n)
 	{
 	  __node_ptr __tmp = __n;
-	  __n = __n->_M_next();
+	  __n = __n->_M_next_ptr();
 	  _M_deallocate_node(__tmp);
 	}
     }
@@ -1902,9 +2429,9 @@ namespace __detail
       __buckets_alloc_type __alloc(_M_node_allocator());
 
       auto __ptr = __buckets_alloc_traits::allocate(__alloc, __bkt_count);
-      __buckets_ptr __p = std::__to_address(__ptr);
-      __builtin_memset(__p, 0, __bkt_count * sizeof(__node_base_ptr));
-      return __p;
+      __builtin_memset(std::__to_address(__ptr), 0,
+		       __bkt_count * sizeof(__node_base*));
+      return __ptr;
     }
 
   template<typename _NodeAlloc>
@@ -1913,10 +2440,8 @@ namespace __detail
     _M_deallocate_buckets(__buckets_ptr __bkts,
 			  std::size_t __bkt_count)
     {
-      typedef typename __buckets_alloc_traits::pointer _Ptr;
-      auto __ptr = std::pointer_traits<_Ptr>::pointer_to(*__bkts);
       __buckets_alloc_type __alloc(_M_node_allocator());
-      __buckets_alloc_traits::deallocate(__alloc, __ptr, __bkt_count);
+      __buckets_alloc_traits::deallocate(__alloc, __bkts, __bkt_count);
     }
 
  //@} hashtable-detail
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..e9d7ada7151
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc
@@ -0,0 +1,57 @@
+// Copyright (C) 2020 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_map>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+  std::size_t
+  operator()(const T& t) const noexcept
+  { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_map<T, int, H, E,
+				  CustomPointerAlloc<std::pair<const T, int>>>;
+
+void test01()
+{
+  typedef CustomPointerAlloc<std::pair<const T, int>> alloc_type;
+  typedef std::unordered_map<T, int, H, E, alloc_type> test_type;
+  test_type v;
+  v.insert({ T(), 0 });
+  VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+  test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..4a895a6302c
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc
@@ -0,0 +1,57 @@
+// Copyright (C) 2020 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_map>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+  std::size_t
+  operator()(const T& t) const noexcept
+  { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_multimap<T, int, H, E,
+				       CustomPointerAlloc<std::pair<const T, int>>>;
+
+void test01()
+{
+  typedef CustomPointerAlloc<std::pair<const T, int>> alloc_type;
+  typedef std::unordered_multimap<T, int, H, E, alloc_type> test_type;
+  test_type v;
+  v.insert({ T(), 0 });
+  VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+  test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..36b5e10cc7b
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc
@@ -0,0 +1,56 @@
+// Copyright (C) 2020 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_set>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+  std::size_t
+  operator()(const T& t) const noexcept
+  { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_multiset<T, H, E, CustomPointerAlloc<T>>;
+
+void test01()
+{
+  typedef CustomPointerAlloc<T> alloc_type;
+  typedef std::unordered_multiset<T, H, E, alloc_type> test_type;
+  test_type v;
+  v.insert(T());
+  VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+  test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
index f6b908ac03e..479104709fb 100644
--- a/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
+++ b/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
@@ -15,10 +15,7 @@
 // with this library; see the file COPYING3.  If not see
 // <http://www.gnu.org/licenses/>.
 
-// This test fails to compile since C++17 (see xfail-if below) so we can only
-// do a "run" test for C++11 and C++14, and a "compile" test for C++17 and up.
-// { dg-do run { target { c++11_only || c++14_only } } }
-// { dg-do compile { target c++17 } }
+// { dg-do run { target { c++11 } } }
 
 #include <unordered_set>
 #include <memory>
@@ -26,15 +23,22 @@
 #include <testsuite_allocator.h>
 
 struct T { int i; };
-bool operator==(const T& l, const T& r) { return l.i == r.i; }
-struct H { std::size_t operator()(const T& t) const noexcept { return t.i; }
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+  std::size_t
+  operator()(const T& t) const noexcept
+  { return t.i; }
 };
-struct E : std::equal_to<T> { };
+
+struct E : std::equal_to<T>
+{ };
 
 using __gnu_test::CustomPointerAlloc;
 
-// { dg-xfail-if "node reinsertion assumes raw pointers" { c++17 } }
-// TODO when removing this xfail change the test back to "dg-do run".
 template class std::unordered_set<T, H, E, CustomPointerAlloc<T>>;
 
 void test01()


Thread overview: 10+ messages
2020-04-19 17:31 François Dumont
2020-05-15 21:12 ` François Dumont
2020-09-28 20:37   ` François Dumont
2020-10-20 11:04     ` Jonathan Wakely
2020-10-20 17:26       ` François Dumont
2020-11-01 21:48       ` François Dumont [this message]
2020-11-02 14:11         ` Jonathan Wakely
2020-11-02 21:33           ` François Dumont
2021-01-11 18:10           ` François Dumont
2021-06-10 17:22           ` François Dumont
