From: "François Dumont" <frs.dumont@gmail.com>
To: "libstdc++@gcc.gnu.org" <libstdc++@gcc.gnu.org>
Subject: Re: libstdc++ PR 57272 Fancy pointer support in Hashtable
Date: Mon, 28 Sep 2020 22:37:35 +0200 [thread overview]
Message-ID: <c2806a4d-055f-613f-cd4a-6aba073651bc@gmail.com> (raw)
In-Reply-To: <ffb24f52-2423-bb8c-b608-d93f396d01ba@gmail.com>
[-- Attachment #1: Type: text/plain, Size: 5956 bytes --]
Following the recent changes to _Hashtable I have rebased the patch and
completely reviewed it.
I managed to integrate the allocator's custom pointer type without
touching the _Hashtable base types like _Hash_code_base or
_Hashtable_base. However, I cannot see how to use the custom pointer type
without affecting the node types: _Hash_node_base, for example, now takes
the custom pointer type as a template parameter.
From an ABI point of view the node types are different, but the data
structure is the same. The only difference is that _Hash_node_base::_M_nxt
is now a custom pointer to _Hash_node<> rather than a plain
_Hash_node_base*.
Even if this patch cannot go in because of the ABI-breaking change, I am
going to adapt some of the code simplifications for master, especially
the _Hash_code_base and _Local_iterator_base ones.
Let me know if you can think of a way to integrate the custom pointer
without impacting the ABI, unless changing the node types and the
associated iterator types is acceptable; note that I have already
noticed that the pretty-printer tests break with those changes.
François
On 15/05/20 11:12 pm, François Dumont wrote:
> I think I have completed this evolution.
>
> I eventually used references to node pointers as much as possible, and
> even used move semantics on them.
>
> My prerequisites for this to work are that nullptr can be assigned to
> the fancy pointer and that a fancy pointer to __node_type is implicitly
> convertible to a fancy pointer to __node_base.
>
> * include/bits/hashtable_policy.h (_Hashtable_base): Add _Alloc
> template parameter.
> (_ReuseOrAllocNode<>::__node_type): Remove.
> (_ReuseOrAllocNode<>::__node_pointer): New.
> (_ReuseOrAllocNode(__node_pointer, __hashtable_alloc&)): Adapt to use
> latter.
> (_ReuseOrAllocNode<>::operator()(_Arg&&)): Return latter.
> (_AllocNode<>::__node_type): Remove.
> (_AllocNode<>::__node_pointer): New.
> (_AllocNode<>::operator()<>(_Arg&&)): Return latter.
> (_Hash_node_base<>): Add _NodePtr template parameter.
> (_Hash_node_value_base<>): Likewise.
> (_Hash_node<>): Add _Ptr template parameter.
> (_Hash_node<>::_M_next()): Remove.
> (_Node_iterator_base<>): Use _NodePtr template parameter.
> (operator==(const _Node_iterator_base&, const _Node_iterator_base&)):
> Make inline friend.
> (operator!=(const _Node_iterator_base&, const _Node_iterator_base&)):
> Likewise.
> (_Node_iterator<>): Use _NodePtr template parameter.
> (_Node_const_iterator<>): Use _NodePtr template parameter.
> (_Map_base<>::__node_type): Remove.
> (_Map_base<>::__node_pointer): New.
> (_Hash_code_base<>): Add _Alloc template parameter.
> (_Hash_code_base<>::__pointer): New.
> (_Hash_code_base<>::__node_pointer): New.
> (_Hash_code_base<>::__node_ptr_arg_t): New.
> (_Local_iterator_base<>): Add _Alloc template parameter. Inherit from
> _Node_iterator_base<>.
> (_Local_iterator_base<>::__base_node_iter): New.
> (_Local_iterator_base<>::_M_cur): Remove.
> (_Local_iterator_base<>::_M_incr()): Adapt.
> (_Local_iterator_base<>::_M_curr()): Remove.
> (operator==(const _Local_iterator_base<>&,
> const _Local_iterator_base<>&)): Remove.
> (operator!=(const _Local_iterator_base<>&,
> const _Local_iterator_base<>&)): Remove.
> (_Local_iterator<>): Add _Alloc and _NodePtr template parameters.
> (_Local_const_iterator<>): Likewise.
> (_Hashtable_base<>): Add _Alloc template parameter.
> (_Hashtable_alloc<>::__node_pointer): New.
> (_Hashtable_alloc<>::__bucket_pointer): New.
> (_Hashtable_alloc<>::_M_allocate_node): Adapt.
> (_Hashtable_alloc<>::_M_deallocate_node): Adapt.
> (_Hashtable_alloc<>::_M_deallocate_node_ptr): Adapt.
> (_Hashtable_alloc<>::_M_deallocate_nodes): Adapt.
> (_Hashtable_alloc<>::_M_allocate_buckets): Adapt.
> (_Hashtable_alloc<>::_M_deallocate_buckets): Adapt.
> * include/bits/hashtable.h (_Hashtable<>): Adapt.
> (_Hashtable<>::_M_begin()): Remove.
> * include/debug/unordered_map: Adapt.
> * include/debug/unordered_set: Adapt.
> * testsuite/23_containers/unordered_map/allocator/ext_ptr.cc: New.
> * testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc: New.
> * testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc: New.
> * testsuite/23_containers/unordered_set/allocator/ext_ptr.cc: New.
>
> Tested under Linux x86_64.
>
> OK to commit?
>
> François
>
> On 19/04/20 7:31 pm, François Dumont wrote:
>> Here is my work in progress to use the allocator's pointer type. This
>> type is used both as the node pointer and as the buckets pointer.
>>
>> Rather than adapting _Local_iterator_base the way I adapted
>> _Node_iterator_base, I prefer to just make it inherit from
>> _Node_iterator_base. This simplifies its implementation and avoids
>> providing dedicated comparison operators.
>>
>> Now I wonder if I need to consider Phil Bouchard's comment regarding
>> how node pointers are passed, either by value or by reference. I have
>> already chosen to pass them as rvalue references on some occasions,
>> and even as lvalue references, as in the _M_bucket_index method. Do
>> you think I should continue this way? Maybe I should use some
>> conditional type: pass by value for raw pointers, and by reference
>> otherwise?
>>
>> François
>>
>
[-- Attachment #2: hashtable_ext_ptr.patch --]
[-- Type: text/x-patch, Size: 88755 bytes --]
diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index 07a4abe5c33..ea3411dec5f 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -101,7 +101,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
* - size_type _M_bucket_count
* - size_type _M_element_count
*
- * with _Bucket being _Hash_node* and _Hash_node containing:
+ * with _Bucket being _Hash_node_base* and _Hash_node containing:
*
* - _Hash_node* _M_next
* - Tp _M_value
@@ -182,8 +182,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_RehashPolicy, _Traits>,
private __detail::_Hashtable_alloc<
__alloc_rebind<_Alloc,
- __detail::_Hash_node<_Value,
- _Traits::__hash_cached::value>>>
+ __detail::_Hash_node<
+#if __cplusplus > 201703L || defined __STRICT_ANSI__
+ typename std::allocator_traits<_Alloc>::pointer,
+#else
+ typename std::allocator_traits<
+ __alloc_rebind<_Alloc, _Value>>::pointer,
+#endif
+ _Traits::__hash_cached::value>>>
{
static_assert(is_same<typename remove_cv<_Value>::type, _Value>::value,
"unordered container must have a non-const, non-volatile value_type");
@@ -194,17 +200,40 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
using __traits_type = _Traits;
using __hash_cached = typename __traits_type::__hash_cached;
- using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>;
- using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
-
- using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>;
+ using __constant_iterators = typename __traits_type::__constant_iterators;
+ using __unique_keys = typename __traits_type::__unique_keys;
+ using __hashtable_alloc = __detail::_Hashtable_alloc<
+ __alloc_rebind<_Alloc,
+ __detail::_Hash_node<
+#if __cplusplus > 201703L || defined __STRICT_ANSI__
+ typename std::allocator_traits<_Alloc>::pointer,
+#else
+ typename std::allocator_traits<
+ __alloc_rebind<_Alloc, _Value>>::pointer,
+#endif
+ __hash_cached::value>>>;
+
+ using __node_value_type =
+ __detail::_Hash_node_value<_Value, __hash_cached::value>;
+ using __node_type = typename __hashtable_alloc::__node_type;
+ using __node_pointer = typename __hashtable_alloc::__node_pointer;
+ using __node_alloc_type =
+ typename __hashtable_alloc::__node_alloc_type;
using __value_alloc_traits =
typename __hashtable_alloc::__value_alloc_traits;
using __node_alloc_traits =
typename __hashtable_alloc::__node_alloc_traits;
using __node_base = typename __hashtable_alloc::__node_base;
- using __bucket_type = typename __hashtable_alloc::__bucket_type;
+ using __node_base_ptr = typename __hashtable_alloc::__node_base_ptr;
+ using __buckets_pointer = typename __hashtable_alloc::__buckets_pointer;
+ using __bucket_ptr_traits = std::pointer_traits<__buckets_pointer>;
+ using __node_base_ptr_traits = std::pointer_traits<__node_base_ptr>;
+
+ using __insert_base = __detail::_Insert<_Key, _Value, _Alloc, _ExtractKey,
+ _Equal, _Hash,
+ _RangeHash, _Unused,
+ _RehashPolicy, _Traits>;
public:
typedef _Key key_type;
@@ -219,20 +248,32 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
typedef value_type& reference;
typedef const value_type& const_reference;
+ using iterator = typename __insert_base::iterator;
+
+ using const_iterator = typename __insert_base::const_iterator;
+
+ using local_iterator = __detail::_Local_iterator<key_type, __node_pointer,
+ _ExtractKey, _Hash, _RangeHash, _Unused,
+ __constant_iterators::value,
+ __hash_cached::value>;
+
+ using const_local_iterator = __detail::_Local_const_iterator<
+ key_type, __node_pointer,
+ _ExtractKey, _Hash, _RangeHash, _Unused,
+ __constant_iterators::value, __hash_cached::value>;
+
private:
using __rehash_type = _RehashPolicy;
using __rehash_state = typename __rehash_type::_State;
- using __constant_iterators = typename __traits_type::__constant_iterators;
- using __unique_keys = typename __traits_type::__unique_keys;
-
using __hashtable_base = __detail::
_Hashtable_base<_Key, _Value, _ExtractKey,
_Equal, _Hash, _RangeHash, _Unused, _Traits>;
using __hash_code_base = typename __hashtable_base::__hash_code_base;
using __hash_code = typename __hashtable_base::__hash_code;
- using __ireturn_type = typename __hashtable_base::__ireturn_type;
+
+ using __ireturn_type = typename __insert_base::__ireturn_type;
using __map_base = __detail::_Map_base<_Key, _Value, _Alloc, _ExtractKey,
_Equal, _Hash, _RangeHash, _Unused,
@@ -256,7 +297,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
struct _Scoped_node
{
// Take ownership of a node with a constructed element.
- _Scoped_node(__node_type* __n, __hashtable_alloc* __h)
+ _Scoped_node(__node_pointer __n, __hashtable_alloc* __h)
: _M_h(__h), _M_node(__n) { }
// Allocate a node and construct an element within it.
@@ -273,7 +314,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Scoped_node& operator=(const _Scoped_node&) = delete;
__hashtable_alloc* _M_h;
- __node_type* _M_node;
+ __node_pointer _M_node;
};
template<typename _Ht>
@@ -293,8 +334,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Getting a bucket index from a node shall not throw because it is used
// in methods (erase, swap...) that shall not throw.
static_assert(noexcept(declval<const __hash_code_base_access&>()
- ._M_bucket_index((const __node_type*)nullptr,
- (std::size_t)0)),
+ ._M_bucket_index(declval<const __node_value_type&>(),
+ (std::size_t)0)),
"Cache the hash code or qualify your functors involved"
" in hash code and bucket index computation with noexcept");
@@ -345,20 +386,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
using size_type = typename __hashtable_base::size_type;
using difference_type = typename __hashtable_base::difference_type;
- using iterator = typename __hashtable_base::iterator;
- using const_iterator = typename __hashtable_base::const_iterator;
-
- using local_iterator = typename __hashtable_base::local_iterator;
- using const_local_iterator = typename __hashtable_base::
- const_local_iterator;
-
#if __cplusplus > 201402L
using node_type = _Node_handle<_Key, _Value, __node_alloc_type>;
using insert_return_type = _Node_insert_return<iterator, node_type>;
#endif
private:
- __bucket_type* _M_buckets = &_M_single_bucket;
+ __buckets_pointer _M_buckets
+ = __bucket_ptr_traits::pointer_to(_M_single_bucket);
size_type _M_bucket_count = 1;
__node_base _M_before_begin;
size_type _M_element_count = 0;
@@ -370,25 +405,29 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// qualified.
// Note that we can't leave hashtable with 0 bucket without adding
// numerous checks in the code to avoid 0 modulus.
- __bucket_type _M_single_bucket = nullptr;
+ __node_base_ptr _M_single_bucket = nullptr;
void
_M_update_bbegin()
{
- if (_M_begin())
- _M_buckets[_M_bucket_index(_M_begin())] = &_M_before_begin;
+ if (_M_before_begin._M_nxt)
+ _M_buckets[_M_bucket_index(*_M_before_begin._M_nxt)] =
+ __node_base_ptr_traits::pointer_to(_M_before_begin);
}
void
- _M_update_bbegin(__node_type* __n)
+ _M_update_bbegin(__node_pointer __n)
{
_M_before_begin._M_nxt = __n;
_M_update_bbegin();
}
bool
- _M_uses_single_bucket(__bucket_type* __bkts) const
- { return __builtin_expect(__bkts == &_M_single_bucket, false); }
+ _M_uses_single_bucket(__buckets_pointer __bkts) const
+ {
+ return __builtin_expect(std::__to_address(__bkts) == &_M_single_bucket,
+ false);
+ }
bool
_M_uses_single_bucket() const
@@ -397,20 +436,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
__hashtable_alloc&
_M_base_alloc() { return *this; }
- __bucket_type*
+ __buckets_pointer
_M_allocate_buckets(size_type __bkt_count)
{
if (__builtin_expect(__bkt_count == 1, false))
{
_M_single_bucket = nullptr;
- return &_M_single_bucket;
+ return __bucket_ptr_traits::pointer_to(_M_single_bucket);
}
return __hashtable_alloc::_M_allocate_buckets(__bkt_count);
}
void
- _M_deallocate_buckets(__bucket_type* __bkts, size_type __bkt_count)
+ _M_deallocate_buckets(__buckets_pointer __bkts, size_type __bkt_count)
{
if (_M_uses_single_bucket(__bkts))
return;
@@ -424,13 +463,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Gets bucket begin, deals with the fact that non-empty buckets contain
// their before begin node.
- __node_type*
+ __node_pointer
_M_bucket_begin(size_type __bkt) const;
- __node_type*
- _M_begin() const
- { return static_cast<__node_type*>(_M_before_begin._M_nxt); }
-
// Assign *this using another _Hashtable instance. Whether elements
// are copied or moved depends on the _Ht reference.
template<typename _Ht>
@@ -552,7 +587,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hashtable&
operator=(initializer_list<value_type> __l)
{
- __reuse_or_alloc_node_gen_t __roan(_M_begin(), *this);
+ __reuse_or_alloc_node_gen_t __roan(_M_before_begin._M_nxt,
+ *this);
_M_before_begin._M_nxt = nullptr;
clear();
@@ -577,11 +613,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Basic container operations
iterator
begin() noexcept
- { return iterator(_M_begin()); }
+ { return iterator(_M_before_begin._M_nxt); }
const_iterator
begin() const noexcept
- { return const_iterator(_M_begin()); }
+ { return const_iterator(_M_before_begin._M_nxt); }
iterator
end() noexcept
@@ -593,7 +629,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
const_iterator
cbegin() const noexcept
- { return const_iterator(_M_begin()); }
+ { return const_iterator(_M_before_begin._M_nxt); }
const_iterator
cend() const noexcept
@@ -642,36 +678,45 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
local_iterator
begin(size_type __bkt)
{
- return local_iterator(*this, _M_bucket_begin(__bkt),
+ return local_iterator(this->hash_function(), _M_bucket_begin(__bkt),
__bkt, _M_bucket_count);
}
local_iterator
end(size_type __bkt)
- { return local_iterator(*this, nullptr, __bkt, _M_bucket_count); }
+ {
+ return local_iterator(this->hash_function(), nullptr,
+ __bkt, _M_bucket_count);
+ }
const_local_iterator
begin(size_type __bkt) const
{
- return const_local_iterator(*this, _M_bucket_begin(__bkt),
+ return const_local_iterator(this->hash_function(), _M_bucket_begin(__bkt),
__bkt, _M_bucket_count);
}
const_local_iterator
end(size_type __bkt) const
- { return const_local_iterator(*this, nullptr, __bkt, _M_bucket_count); }
+ {
+ return const_local_iterator(this->hash_function(), nullptr,
+ __bkt, _M_bucket_count);
+ }
// DR 691.
const_local_iterator
cbegin(size_type __bkt) const
{
- return const_local_iterator(*this, _M_bucket_begin(__bkt),
+ return const_local_iterator(this->hash_function(), _M_bucket_begin(__bkt),
__bkt, _M_bucket_count);
}
const_local_iterator
cend(size_type __bkt) const
- { return const_local_iterator(*this, nullptr, __bkt, _M_bucket_count); }
+ {
+ return const_local_iterator(this->hash_function(), nullptr,
+ __bkt, _M_bucket_count);
+ }
float
load_factor() const noexcept
@@ -711,7 +756,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
private:
// Bucket index computation helpers.
size_type
- _M_bucket_index(__node_type* __n) const noexcept
+ _M_bucket_index(const __node_value_type& __n) const noexcept
{ return __hash_code_base::_M_bucket_index(__n, _M_bucket_count); }
size_type
@@ -720,44 +765,44 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Find and insert helper functions and types
// Find the node before the one matching the criteria.
- __node_base*
+ __node_base_ptr
_M_find_before_node(size_type, const key_type&, __hash_code) const;
- __node_type*
+ __node_pointer
_M_find_node(size_type __bkt, const key_type& __key,
__hash_code __c) const
{
- __node_base* __before_n = _M_find_before_node(__bkt, __key, __c);
+ __node_base_ptr __before_n = _M_find_before_node(__bkt, __key, __c);
if (__before_n)
- return static_cast<__node_type*>(__before_n->_M_nxt);
+ return __before_n->_M_nxt;
return nullptr;
}
// Insert a node at the beginning of a bucket.
void
- _M_insert_bucket_begin(size_type, __node_type*);
+ _M_insert_bucket_begin(size_type, __node_pointer);
// Remove the bucket first node
void
- _M_remove_bucket_begin(size_type __bkt, __node_type* __next_n,
+ _M_remove_bucket_begin(size_type __bkt, __node_pointer __next_n,
size_type __next_bkt);
// Get the node before __n in the bucket __bkt
- __node_base*
- _M_get_previous_node(size_type __bkt, __node_base* __n);
+ __node_base_ptr
+ _M_get_previous_node(size_type __bkt, __node_pointer __n);
// Insert node __n with hash code __code, in bucket __bkt if no
// rehash (assumes no element with same key already present).
// Takes ownership of __n if insertion succeeds, throws otherwise.
iterator
_M_insert_unique_node(size_type __bkt, __hash_code,
- __node_type* __n, size_type __n_elt = 1);
+ __node_pointer __n, size_type __n_elt = 1);
// Insert node __n with key __k and hash code __code.
// Takes ownership of __n if insertion succeeds, throws otherwise.
iterator
- _M_insert_multi_node(__node_type* __hint,
- __hash_code __code, __node_type* __n);
+ _M_insert_multi_node(__node_pointer __hint,
+ __hash_code __code, __node_pointer __n);
template<typename... _Args>
std::pair<iterator, bool>
@@ -814,7 +859,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_erase(false_type __uks, const key_type&);
iterator
- _M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n);
+ _M_erase(size_type __bkt, __node_base_ptr __prev_n, __node_pointer __n);
public:
// Emplace
@@ -874,7 +919,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
const key_type& __k = __nh._M_key();
__hash_code __code = this->_M_hash_code(__k);
size_type __bkt = _M_bucket_index(__code);
- if (__node_type* __n = _M_find_node(__bkt, __k, __code))
+ if (__node_pointer __n = _M_find_node(__bkt, __k, __code))
{
__ret.node = std::move(__nh);
__ret.position = iterator(__n);
@@ -910,15 +955,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
private:
node_type
- _M_extract_node(size_t __bkt, __node_base* __prev_n)
+ _M_extract_node(size_t __bkt, __node_base_ptr __prev_n)
{
- __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
+ __node_pointer __n = __prev_n->_M_nxt;
if (__prev_n == _M_buckets[__bkt])
- _M_remove_bucket_begin(__bkt, __n->_M_next(),
- __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
+ _M_remove_bucket_begin(__bkt, __n->_M_nxt,
+ __n->_M_nxt ? _M_bucket_index(*__n->_M_nxt) : 0);
else if (__n->_M_nxt)
{
- size_type __next_bkt = _M_bucket_index(__n->_M_next());
+ size_type __next_bkt = _M_bucket_index(*__n->_M_nxt);
if (__next_bkt != __bkt)
_M_buckets[__next_bkt] = __prev_n;
}
@@ -934,7 +979,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
node_type
extract(const_iterator __pos)
{
- size_t __bkt = _M_bucket_index(__pos._M_cur);
+ size_t __bkt = _M_bucket_index(*__pos._M_cur);
return _M_extract_node(__bkt,
_M_get_previous_node(__bkt, __pos._M_cur));
}
@@ -946,7 +991,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
node_type __nh;
__hash_code __code = this->_M_hash_code(__k);
std::size_t __bkt = _M_bucket_index(__code);
- if (__node_base* __prev_node = _M_find_before_node(__bkt, __k, __code))
+ if (__node_base_ptr __prev_node = _M_find_before_node(__bkt, __k, __code))
__nh = _M_extract_node(__bkt, __prev_node);
return __nh;
}
@@ -1016,10 +1061,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_bucket_begin(size_type __bkt) const
- -> __node_type*
+ -> __node_pointer
{
- __node_base* __n = _M_buckets[__bkt];
- return __n ? static_cast<__node_type*>(__n->_M_nxt) : nullptr;
+ __node_base_ptr __n = _M_buckets[__bkt];
+ return __n ? __n->_M_nxt : nullptr;
}
template<typename _Key, typename _Value, typename _Alloc,
@@ -1107,7 +1152,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
&& __this_alloc != __that_alloc)
{
// Replacement allocator cannot free existing storage.
- this->_M_deallocate_nodes(_M_begin());
+ this->_M_deallocate_nodes(_M_before_begin._M_nxt);
_M_before_begin._M_nxt = nullptr;
_M_deallocate_buckets();
_M_buckets = nullptr;
@@ -1148,7 +1193,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_assign_elements(_Ht&& __ht)
{
- __bucket_type* __former_buckets = nullptr;
+ __buckets_pointer __former_buckets = nullptr;
std::size_t __former_bucket_count = _M_bucket_count;
const __rehash_state& __former_state = _M_rehash_policy._M_state();
@@ -1159,15 +1204,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_bucket_count = __ht._M_bucket_count;
}
else
- __builtin_memset(_M_buckets, 0,
- _M_bucket_count * sizeof(__bucket_type));
+ std::fill_n(_M_buckets, _M_bucket_count, nullptr);
__try
{
__hashtable_base::operator=(std::forward<_Ht>(__ht));
_M_element_count = __ht._M_element_count;
_M_rehash_policy = __ht._M_rehash_policy;
- __reuse_or_alloc_node_gen_t __roan(_M_begin(), *this);
+ __reuse_or_alloc_node_gen_t __roan(_M_before_begin._M_nxt, *this);
_M_before_begin._M_nxt = nullptr;
_M_assign(std::forward<_Ht>(__ht), __roan);
if (__former_buckets)
@@ -1183,8 +1227,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_buckets = __former_buckets;
_M_bucket_count = __former_bucket_count;
}
- __builtin_memset(_M_buckets, 0,
- _M_bucket_count * sizeof(__bucket_type));
+ std::fill_n(_M_buckets, _M_bucket_count, nullptr);
__throw_exception_again;
}
}
@@ -1199,7 +1242,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_assign(_Ht&& __ht, const _NodeGenerator& __node_gen)
{
- __bucket_type* __buckets = nullptr;
+ __buckets_pointer __buckets = nullptr;
if (!_M_buckets)
_M_buckets = __buckets = _M_allocate_buckets(_M_bucket_count);
@@ -1210,20 +1253,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// First deal with the special first node pointed to by
// _M_before_begin.
- __node_type* __ht_n = __ht._M_begin();
- __node_type* __this_n
+ __node_pointer __ht_n = __ht._M_before_begin._M_nxt;
+ __node_pointer __this_n
= __node_gen(__fwd_value_for<_Ht>(__ht_n->_M_v()));
- this->_M_copy_code(__this_n, __ht_n);
+ this->_M_copy_code(*__this_n, *__ht_n);
_M_update_bbegin(__this_n);
// Then deal with other nodes.
- __node_base* __prev_n = __this_n;
- for (__ht_n = __ht_n->_M_next(); __ht_n; __ht_n = __ht_n->_M_next())
+ __node_pointer __prev_n = __this_n;
+ for (__ht_n = __ht_n->_M_nxt; __ht_n; __ht_n = __ht_n->_M_nxt)
{
__this_n = __node_gen(__fwd_value_for<_Ht>(__ht_n->_M_v()));
__prev_n->_M_nxt = __this_n;
- this->_M_copy_code(__this_n, __ht_n);
- size_type __bkt = _M_bucket_index(__this_n);
+ this->_M_copy_code(*__this_n, *__ht_n);
+ size_type __bkt = _M_bucket_index(*__this_n);
if (!_M_buckets[__bkt])
_M_buckets[__bkt] = __prev_n;
__prev_n = __this_n;
@@ -1250,7 +1293,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_rehash_policy._M_reset();
_M_bucket_count = 1;
_M_single_bucket = nullptr;
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __bucket_ptr_traits::pointer_to(_M_single_bucket);
_M_before_begin._M_nxt = nullptr;
_M_element_count = 0;
}
@@ -1267,7 +1310,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (__builtin_expect(std::__addressof(__ht) == this, false))
return;
- this->_M_deallocate_nodes(_M_begin());
+ this->_M_deallocate_nodes(_M_before_begin._M_nxt);
_M_deallocate_buckets();
__hashtable_base::operator=(std::move(__ht));
_M_rehash_policy = __ht._M_rehash_policy;
@@ -1275,7 +1318,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_buckets = __ht._M_buckets;
else
{
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __bucket_ptr_traits::pointer_to(_M_single_bucket);
_M_single_bucket = __ht._M_single_bucket;
}
@@ -1350,9 +1393,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_rehash_policy(__ht._M_rehash_policy)
{
// Update buckets if __ht is using its single bucket.
- if (__ht._M_uses_single_bucket())
+ if (std::__to_address(_M_buckets) == &__ht._M_single_bucket)
{
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __bucket_ptr_traits::pointer_to(_M_single_bucket);
_M_single_bucket = __ht._M_single_bucket;
}
@@ -1373,7 +1416,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
__map_base(__ht),
__rehash_base(__ht),
__hashtable_alloc(__node_alloc_type(__a)),
- _M_buckets(),
+ _M_buckets(nullptr),
_M_bucket_count(__ht._M_bucket_count),
_M_element_count(__ht._M_element_count),
_M_rehash_policy(__ht._M_rehash_policy)
@@ -1403,7 +1446,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
{
if (__ht._M_uses_single_bucket())
{
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __bucket_ptr_traits::pointer_to(_M_single_bucket);
_M_single_bucket = __ht._M_single_bucket;
}
else
@@ -1411,7 +1454,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Fix bucket containing the _M_before_begin pointer that can't be
// moved.
- _M_update_bbegin(__ht._M_begin());
+ _M_update_bbegin(__ht._M_before_begin._M_nxt);
__ht._M_reset();
}
@@ -1464,13 +1507,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (!__x._M_uses_single_bucket())
{
_M_buckets = __x._M_buckets;
- __x._M_buckets = &__x._M_single_bucket;
+ __x._M_buckets =
+ __bucket_ptr_traits::pointer_to(__x._M_single_bucket);
}
}
else if (__x._M_uses_single_bucket())
{
__x._M_buckets = _M_buckets;
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __bucket_ptr_traits::pointer_to(_M_single_bucket);
}
else
std::swap(_M_buckets, __x._M_buckets);
@@ -1538,7 +1582,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// find any new equivalent value.
size_type __result = 1;
for (auto __ref = __it++;
- __it._M_cur && this->_M_node_equals(__ref._M_cur, __it._M_cur);
+ __it._M_cur && this->_M_node_equals(*__ref._M_cur, *__it._M_cur);
++__it)
++__result;
@@ -1566,7 +1610,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// All equivalent values are next to each other, if we find a
// non-equivalent value after an equivalent one it means that we won't
// find any new equivalent value.
- while (__ite._M_cur && this->_M_node_equals(__beg._M_cur, __ite._M_cur))
+ while (__ite._M_cur && this->_M_node_equals(*__beg._M_cur, *__ite._M_cur))
++__ite;
return { __beg, __ite };
@@ -1593,7 +1637,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// All equivalent values are next to each other, if we find a
// non-equivalent value after an equivalent one it means that we won't
// find any new equivalent value.
- while (__ite._M_cur && this->_M_node_equals(__beg._M_cur, __ite._M_cur))
+ while (__ite._M_cur && this->_M_node_equals(*__beg._M_cur, *__ite._M_cur))
++__ite;
return { __beg, __ite };
@@ -1610,19 +1654,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_find_before_node(size_type __bkt, const key_type& __k,
__hash_code __code) const
- -> __node_base*
+ -> __node_base_ptr
{
- __node_base* __prev_p = _M_buckets[__bkt];
+ __node_base_ptr __prev_p = _M_buckets[__bkt];
if (!__prev_p)
return nullptr;
- for (__node_type* __p = static_cast<__node_type*>(__prev_p->_M_nxt);;
- __p = __p->_M_next())
+ for (__node_pointer __p = __prev_p->_M_nxt;; __p = __p->_M_nxt)
{
- if (this->_M_equals(__k, __code, __p))
+ if (this->_M_equals(__k, __code, *__p))
return __prev_p;
- if (!__p->_M_nxt || _M_bucket_index(__p->_M_next()) != __bkt)
+ if (!__p->_M_nxt || _M_bucket_index(*__p->_M_nxt) != __bkt)
break;
__prev_p = __p;
}
@@ -1637,7 +1680,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
- _M_insert_bucket_begin(size_type __bkt, __node_type* __node)
+ _M_insert_bucket_begin(size_type __bkt, __node_pointer __node)
{
if (_M_buckets[__bkt])
{
@@ -1657,9 +1700,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (__node->_M_nxt)
// We must update former begin bucket that is pointing to
// _M_before_begin.
- _M_buckets[_M_bucket_index(__node->_M_next())] = __node;
-
- _M_buckets[__bkt] = &_M_before_begin;
+ _M_buckets[_M_bucket_index(*__node->_M_nxt)] = __node;
+ _M_buckets[__bkt] =
+ __node_base_ptr_traits::pointer_to(_M_before_begin);
}
}
@@ -1670,7 +1713,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
- _M_remove_bucket_begin(size_type __bkt, __node_type* __next,
+ _M_remove_bucket_begin(size_type __bkt, __node_pointer __next,
size_type __next_bkt)
{
if (!__next || __next_bkt != __bkt)
@@ -1681,7 +1724,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_buckets[__next_bkt] = _M_buckets[__bkt];
// Second update before begin node if necessary
- if (&_M_before_begin == _M_buckets[__bkt])
+ if (&_M_before_begin == std::__to_address(_M_buckets[__bkt]))
_M_before_begin._M_nxt = __next;
_M_buckets[__bkt] = nullptr;
}
@@ -1694,10 +1737,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
- _M_get_previous_node(size_type __bkt, __node_base* __n)
- -> __node_base*
+ _M_get_previous_node(size_type __bkt, __node_pointer __n)
+ -> __node_base_ptr
{
- __node_base* __prev_n = _M_buckets[__bkt];
+ __node_base_ptr __prev_n = _M_buckets[__bkt];
while (__prev_n->_M_nxt != __n)
__prev_n = __prev_n->_M_nxt;
return __prev_n;
@@ -1719,7 +1762,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
__hash_code __code = this->_M_hash_code(__k);
size_type __bkt = _M_bucket_index(__code);
- if (__node_type* __p = _M_find_node(__bkt, __k, __code))
+ if (__node_pointer __p = _M_find_node(__bkt, __k, __code))
// There is already an equivalent node, no insertion
return std::make_pair(iterator(__p), false);
@@ -1760,7 +1803,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_insert_unique_node(size_type __bkt, __hash_code __code,
- __node_type* __node, size_type __n_elt)
+ __node_pointer __node, size_type __n_elt)
-> iterator
{
const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1774,7 +1817,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
__bkt = _M_bucket_index(__code);
}
- this->_M_store_code(__node, __code);
+ this->_M_store_code(*__node, __code);
// Always insert at the beginning of the bucket.
_M_insert_bucket_begin(__bkt, __node);
@@ -1789,8 +1832,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
- _M_insert_multi_node(__node_type* __hint,
- __hash_code __code, __node_type* __node)
+ _M_insert_multi_node(__node_pointer __hint,
+ __hash_code __code, __node_pointer __node)
-> iterator
{
const __rehash_state& __saved_state = _M_rehash_policy._M_state();
@@ -1800,17 +1843,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (__do_rehash.first)
_M_rehash(__do_rehash.second, __saved_state);
- this->_M_store_code(__node, __code);
+ this->_M_store_code(*__node, __code);
const key_type& __k = _ExtractKey{}(__node->_M_v());
size_type __bkt = _M_bucket_index(__code);
// Find the node before an equivalent one or use hint if it exists and
// if it is equivalent.
- __node_base* __prev
- = __builtin_expect(__hint != nullptr, false)
- && this->_M_equals(__k, __code, __hint)
- ? __hint
- : _M_find_before_node(__bkt, __k, __code);
+ __node_base_ptr __prev;
+ if (__builtin_expect(__hint != nullptr, false)
+ && this->_M_equals(__k, __code, *__hint))
+ __prev = __hint;
+ else
+ __prev = _M_find_before_node(__bkt, __k, __code);
+
if (__prev)
{
// Insert after the node before the equivalent one.
@@ -1820,9 +1865,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// hint might be the last bucket node, in this case we need to
// update next bucket.
if (__node->_M_nxt
- && !this->_M_equals(__k, __code, __node->_M_next()))
+ && !this->_M_equals(__k, __code, *__node->_M_nxt))
{
- size_type __next_bkt = _M_bucket_index(__node->_M_next());
+ size_type __next_bkt = _M_bucket_index(*__node->_M_nxt);
if (__next_bkt != __bkt)
_M_buckets[__next_bkt] = __node;
}
@@ -1853,7 +1898,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
__hash_code __code = this->_M_hash_code(__k);
size_type __bkt = _M_bucket_index(__code);
- if (__node_type* __node = _M_find_node(__bkt, __k, __code))
+ if (__node_pointer __node = _M_find_node(__bkt, __k, __code))
return { iterator(__node), false };
_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
@@ -1899,14 +1944,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
erase(const_iterator __it)
-> iterator
{
- __node_type* __n = __it._M_cur;
- std::size_t __bkt = _M_bucket_index(__n);
+ std::size_t __bkt = _M_bucket_index(*__it._M_cur);
// Look for previous node to unlink it from the erased one, this
// is why we need buckets to contain the before begin to make
// this search fast.
- __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
- return _M_erase(__bkt, __prev_n, __n);
+ __node_base_ptr __prev_n = _M_get_previous_node(__bkt, __it._M_cur);
+ return _M_erase(__bkt, __prev_n, __it._M_cur);
}
template<typename _Key, typename _Value, typename _Alloc,
@@ -1916,21 +1960,21 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
- _M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n)
+ _M_erase(size_type __bkt, __node_base_ptr __prev_n, __node_pointer __n)
-> iterator
{
if (__prev_n == _M_buckets[__bkt])
- _M_remove_bucket_begin(__bkt, __n->_M_next(),
- __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
+ _M_remove_bucket_begin(__bkt, __n->_M_nxt,
+ __n->_M_nxt ? _M_bucket_index(*__n->_M_nxt) : 0);
else if (__n->_M_nxt)
{
- size_type __next_bkt = _M_bucket_index(__n->_M_next());
+ size_type __next_bkt = _M_bucket_index(*__n->_M_nxt);
if (__next_bkt != __bkt)
_M_buckets[__next_bkt] = __prev_n;
}
__prev_n->_M_nxt = __n->_M_nxt;
- iterator __result(__n->_M_next());
+ iterator __result(__n->_M_nxt);
this->_M_deallocate_node(__n);
--_M_element_count;
@@ -1951,13 +1995,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
std::size_t __bkt = _M_bucket_index(__code);
// Look for the node before the first matching node.
- __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
+ __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
if (!__prev_n)
return 0;
// We found a matching node, erase it.
- __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
- _M_erase(__bkt, __prev_n, __n);
+ _M_erase(__bkt, __prev_n, __prev_n->_M_nxt);
return 1;
}
@@ -1975,7 +2018,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
std::size_t __bkt = _M_bucket_index(__code);
// Look for the node before the first matching node.
- __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
+ __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
if (!__prev_n)
return 0;
@@ -1985,18 +2028,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// We use one loop to find all matching nodes and another to deallocate
// them so that the key stays valid during the first loop. It might be
// invalidated indirectly when destroying nodes.
- __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
- __node_type* __n_last = __n->_M_next();
- while (__n_last && this->_M_node_equals(__n, __n_last))
- __n_last = __n_last->_M_next();
+ __node_pointer __n = __prev_n->_M_nxt;
+ __node_pointer __n_last = __n->_M_nxt;
+ while (__n_last && this->_M_node_equals(*__n, *__n_last))
+ __n_last = __n_last->_M_nxt;
- std::size_t __n_last_bkt = __n_last ? _M_bucket_index(__n_last) : __bkt;
+ std::size_t __n_last_bkt = __n_last ? _M_bucket_index(*__n_last) : __bkt;
// Deallocate nodes.
size_type __result = 0;
do
{
- __node_type* __p = __n->_M_next();
+ __node_pointer __p = __n->_M_nxt;
this->_M_deallocate_node(__n);
__n = __p;
++__result;
@@ -2022,27 +2065,27 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
erase(const_iterator __first, const_iterator __last)
-> iterator
{
- __node_type* __n = __first._M_cur;
- __node_type* __last_n = __last._M_cur;
+ __node_pointer __n = __first._M_cur;
+ __node_pointer __last_n = __last._M_cur;
if (__n == __last_n)
return iterator(__n);
- std::size_t __bkt = _M_bucket_index(__n);
+ std::size_t __bkt = _M_bucket_index(*__n);
- __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
+ __node_base_ptr __prev_n = _M_get_previous_node(__bkt, __n);
bool __is_bucket_begin = __n == _M_bucket_begin(__bkt);
std::size_t __n_bkt = __bkt;
for (;;)
{
do
{
- __node_type* __tmp = __n;
- __n = __n->_M_next();
+ __node_pointer __tmp = __n;
+ __n = __n->_M_nxt;
this->_M_deallocate_node(__tmp);
--_M_element_count;
if (!__n)
break;
- __n_bkt = _M_bucket_index(__n);
+ __n_bkt = _M_bucket_index(*__n);
}
while (__n != __last_n && __n_bkt == __bkt);
if (__is_bucket_begin)
@@ -2068,8 +2111,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
clear() noexcept
{
- this->_M_deallocate_nodes(_M_begin());
- __builtin_memset(_M_buckets, 0, _M_bucket_count * sizeof(__bucket_type));
+ this->_M_deallocate_nodes(_M_before_begin._M_nxt);
+ std::fill_n(_M_buckets, _M_bucket_count, nullptr);
_M_element_count = 0;
_M_before_begin._M_nxt = nullptr;
}
@@ -2129,20 +2172,22 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_rehash_aux(size_type __bkt_count, true_type /* __uks */)
{
- __bucket_type* __new_buckets = _M_allocate_buckets(__bkt_count);
- __node_type* __p = _M_begin();
+ __buckets_pointer __new_buckets = _M_allocate_buckets(__bkt_count);
+ auto __before_begin_ptr =
+ __node_base_ptr_traits::pointer_to(_M_before_begin);
+ __node_pointer __p = _M_before_begin._M_nxt;
_M_before_begin._M_nxt = nullptr;
std::size_t __bbegin_bkt = 0;
while (__p)
{
- __node_type* __next = __p->_M_next();
+ __node_pointer __next = __p->_M_nxt;
std::size_t __bkt
- = __hash_code_base::_M_bucket_index(__p, __bkt_count);
+ = __hash_code_base::_M_bucket_index(*__p, __bkt_count);
if (!__new_buckets[__bkt])
{
__p->_M_nxt = _M_before_begin._M_nxt;
_M_before_begin._M_nxt = __p;
- __new_buckets[__bkt] = &_M_before_begin;
+ __new_buckets[__bkt] = __before_begin_ptr;
if (__p->_M_nxt)
__new_buckets[__bbegin_bkt] = __p;
__bbegin_bkt = __bkt;
@@ -2172,20 +2217,21 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_rehash_aux(size_type __bkt_count, false_type /* __uks */)
{
- __bucket_type* __new_buckets = _M_allocate_buckets(__bkt_count);
-
- __node_type* __p = _M_begin();
+ auto __new_buckets = _M_allocate_buckets(__bkt_count);
+ auto __before_begin_ptr =
+ __node_base_ptr_traits::pointer_to(_M_before_begin);
+ __node_pointer __p = _M_before_begin._M_nxt;
_M_before_begin._M_nxt = nullptr;
std::size_t __bbegin_bkt = 0;
std::size_t __prev_bkt = 0;
- __node_type* __prev_p = nullptr;
+ __node_pointer __prev_p{};
bool __check_bucket = false;
while (__p)
{
- __node_type* __next = __p->_M_next();
+ __node_pointer __next = __p->_M_nxt;
std::size_t __bkt
- = __hash_code_base::_M_bucket_index(__p, __bkt_count);
+ = __hash_code_base::_M_bucket_index(*__p, __bkt_count);
if (__prev_p && __prev_bkt == __bkt)
{
@@ -2211,7 +2257,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (__prev_p->_M_nxt)
{
std::size_t __next_bkt
- = __hash_code_base::_M_bucket_index(__prev_p->_M_next(),
+ = __hash_code_base::_M_bucket_index(*__prev_p->_M_nxt,
__bkt_count);
if (__next_bkt != __prev_bkt)
__new_buckets[__next_bkt] = __prev_p;
@@ -2223,7 +2269,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
{
__p->_M_nxt = _M_before_begin._M_nxt;
_M_before_begin._M_nxt = __p;
- __new_buckets[__bkt] = &_M_before_begin;
+ __new_buckets[__bkt] = __before_begin_ptr;
if (__p->_M_nxt)
__new_buckets[__bbegin_bkt] = __p;
__bbegin_bkt = __bkt;
@@ -2242,7 +2288,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (__check_bucket && __prev_p->_M_nxt)
{
std::size_t __next_bkt
- = __hash_code_base::_M_bucket_index(__prev_p->_M_next(),
+ = __hash_code_base::_M_bucket_index(*__prev_p->_M_nxt,
__bkt_count);
if (__next_bkt != __prev_bkt)
__new_buckets[__next_bkt] = __prev_p;
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 0109ef86a7b..3f5faa7dba8 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -107,10 +107,10 @@ namespace __detail
using __hashtable_alloc = _Hashtable_alloc<__node_alloc_type>;
using __node_alloc_traits =
typename __hashtable_alloc::__node_alloc_traits;
- using __node_type = typename __hashtable_alloc::__node_type;
+ using __node_pointer = typename __hashtable_alloc::__node_pointer;
public:
- _ReuseOrAllocNode(__node_type* __nodes, __hashtable_alloc& __h)
+ _ReuseOrAllocNode(__node_pointer __nodes, __hashtable_alloc& __h)
: _M_nodes(__nodes), _M_h(__h) { }
_ReuseOrAllocNode(const _ReuseOrAllocNode&) = delete;
@@ -118,13 +118,13 @@ namespace __detail
{ _M_h._M_deallocate_nodes(_M_nodes); }
template<typename _Arg>
- __node_type*
+ __node_pointer
operator()(_Arg&& __arg) const
{
if (_M_nodes)
{
- __node_type* __node = _M_nodes;
- _M_nodes = _M_nodes->_M_next();
+ __node_pointer __node = _M_nodes;
+ _M_nodes = _M_nodes->_M_nxt;
__node->_M_nxt = nullptr;
auto& __a = _M_h._M_node_allocator();
__node_alloc_traits::destroy(__a, __node->_M_valptr());
@@ -144,7 +144,7 @@ namespace __detail
}
private:
- mutable __node_type* _M_nodes;
+ mutable __node_pointer _M_nodes;
__hashtable_alloc& _M_h;
};
@@ -155,14 +155,14 @@ namespace __detail
{
private:
using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
- using __node_type = typename __hashtable_alloc::__node_type;
+ using __node_pointer = typename __hashtable_alloc::__node_pointer;
public:
_AllocNode(__hashtable_alloc& __h)
: _M_h(__h) { }
template<typename _Arg>
- __node_type*
+ __node_pointer
operator()(_Arg&& __arg) const
{ return _M_h._M_allocate_node(std::forward<_Arg>(__arg)); }
@@ -211,14 +211,17 @@ namespace __detail
* nodes also store a hash code. In some cases (e.g. strings) this
* may be a performance win.
*/
- struct _Hash_node_base
- {
- _Hash_node_base* _M_nxt;
+ template<typename _NodePtr>
+ struct _Hash_node_base
+ {
+ using __node_pointer = _NodePtr;
- _Hash_node_base() noexcept : _M_nxt() { }
+ __node_pointer _M_nxt;
- _Hash_node_base(_Hash_node_base* __next) noexcept : _M_nxt(__next) { }
- };
+ _Hash_node_base() noexcept : _M_nxt() { }
+
+ _Hash_node_base(__node_pointer __next) noexcept : _M_nxt(__next) { }
+ };
/**
* struct _Hash_node_value_base
@@ -226,7 +229,7 @@ namespace __detail
* Node type with the value to store.
*/
template<typename _Value>
- struct _Hash_node_value_base : _Hash_node_base
+ struct _Hash_node_value_base
{
typedef _Value value_type;
@@ -250,54 +253,49 @@ namespace __detail
};
/**
- * Primary template struct _Hash_node.
+ * Primary template struct _Hash_node_code_cache.
*/
- template<typename _Value, bool _Cache_hash_code>
- struct _Hash_node;
+ template<bool _Cache_hash_code>
+ struct _Hash_node_code_cache
+ { };
/**
- * Specialization for nodes with caches, struct _Hash_node.
- *
- * Base class is __detail::_Hash_node_value_base.
+ * Specialization for node with cache, struct _Hash_node_code_cache.
*/
- template<typename _Value>
- struct _Hash_node<_Value, true> : _Hash_node_value_base<_Value>
- {
- std::size_t _M_hash_code;
+ template<>
+ struct _Hash_node_code_cache<true>
+ { std::size_t _M_hash_code; };
- _Hash_node*
- _M_next() const noexcept
- { return static_cast<_Hash_node*>(this->_M_nxt); }
- };
+ template<typename _Value, bool _Cache_hash_code>
+ struct _Hash_node_value
+ : _Hash_node_value_base<_Value>
+ , _Hash_node_code_cache<_Cache_hash_code>
+ { };
/**
- * Specialization for nodes without caches, struct _Hash_node.
- *
- * Base class is __detail::_Hash_node_value_base.
+ * Primary template struct _Hash_node.
*/
- template<typename _Value>
- struct _Hash_node<_Value, false> : _Hash_node_value_base<_Value>
- {
- _Hash_node*
- _M_next() const noexcept
- { return static_cast<_Hash_node*>(this->_M_nxt); }
- };
+ template<typename _Ptr, bool _Cache_hash_code>
+ struct _Hash_node
+ : _Hash_node_base<__ptr_rebind<_Ptr,
+ _Hash_node<_Ptr, _Cache_hash_code>>>
+ , _Hash_node_value<typename std::pointer_traits<_Ptr>::element_type,
+ _Cache_hash_code>
+ { };
/// Base class for node iterators.
- template<typename _Value, bool _Cache_hash_code>
+ template<typename _NodePtr>
struct _Node_iterator_base
{
- using __node_type = _Hash_node<_Value, _Cache_hash_code>;
-
- __node_type* _M_cur;
+ _NodePtr _M_cur;
_Node_iterator_base() = default;
- _Node_iterator_base(__node_type* __p) noexcept
+ _Node_iterator_base(_NodePtr __p) noexcept
: _M_cur(__p) { }
void
_M_incr() noexcept
- { _M_cur = _M_cur->_M_next(); }
+ { _M_cur = _M_cur->_M_nxt; }
friend bool
operator==(const _Node_iterator_base& __x, const _Node_iterator_base& __y)
@@ -313,30 +311,31 @@ namespace __detail
};
/// Node iterators, used to iterate through all the hashtable.
- template<typename _Value, bool __constant_iterators, bool __cache>
+ template<typename _NodePtr, bool __constant_iterators>
struct _Node_iterator
- : public _Node_iterator_base<_Value, __cache>
+ : public _Node_iterator_base<_NodePtr>
{
private:
- using __base_type = _Node_iterator_base<_Value, __cache>;
- using __node_type = typename __base_type::__node_type;
+ using __base_type = _Node_iterator_base<_NodePtr>;
+ using __node_type = typename std::pointer_traits<_NodePtr>::element_type;
public:
- typedef _Value value_type;
- typedef std::ptrdiff_t difference_type;
+ typedef typename __node_type::value_type value_type;
+ typedef typename std::pointer_traits<_NodePtr>::difference_type
+ difference_type;
typedef std::forward_iterator_tag iterator_category;
using pointer = typename std::conditional<__constant_iterators,
- const _Value*, _Value*>::type;
+ const value_type*, value_type*>::type;
using reference = typename std::conditional<__constant_iterators,
- const _Value&, _Value&>::type;
+ const value_type&, value_type&>::type;
_Node_iterator() noexcept
: __base_type(nullptr) { }
explicit
- _Node_iterator(__node_type* __p) noexcept
+ _Node_iterator(_NodePtr __p) noexcept
: __base_type(__p) { }
reference
@@ -364,31 +363,32 @@ namespace __detail
};
/// Node const_iterators, used to iterate through all the hashtable.
- template<typename _Value, bool __constant_iterators, bool __cache>
+ template<typename _NodePtr, bool __constant_iterators>
struct _Node_const_iterator
- : public _Node_iterator_base<_Value, __cache>
+ : public _Node_iterator_base<_NodePtr>
{
private:
- using __base_type = _Node_iterator_base<_Value, __cache>;
- using __node_type = typename __base_type::__node_type;
+ using __base_type = _Node_iterator_base<_NodePtr>;
+ using __node_type = typename std::pointer_traits<_NodePtr>::element_type;
public:
- typedef _Value value_type;
- typedef std::ptrdiff_t difference_type;
+ typedef typename __node_type::value_type value_type;
+ typedef typename std::pointer_traits<_NodePtr>::difference_type
+ difference_type;
typedef std::forward_iterator_tag iterator_category;
- typedef const _Value* pointer;
- typedef const _Value& reference;
+ typedef const value_type* pointer;
+ typedef const value_type& reference;
_Node_const_iterator() noexcept
: __base_type(nullptr) { }
explicit
- _Node_const_iterator(__node_type* __p) noexcept
+ _Node_const_iterator(_NodePtr __p) noexcept
: __base_type(__p) { }
- _Node_const_iterator(const _Node_iterator<_Value, __constant_iterators,
- __cache>& __x) noexcept
+ _Node_const_iterator(const _Node_iterator<_NodePtr,
+ __constant_iterators>& __x) noexcept
: __base_type(__x._M_cur) { }
reference
@@ -670,11 +670,9 @@ namespace __detail
_Unused, _RehashPolicy, _Traits>;
using __hash_code = typename __hashtable_base::__hash_code;
- using __node_type = typename __hashtable_base::__node_type;
public:
using key_type = typename __hashtable_base::key_type;
- using iterator = typename __hashtable_base::iterator;
using mapped_type = typename std::tuple_element<1, _Pair>::type;
mapped_type&
@@ -704,7 +702,7 @@ namespace __detail
__hashtable* __h = static_cast<__hashtable*>(this);
__hash_code __code = __h->_M_hash_code(__k);
std::size_t __bkt = __h->_M_bucket_index(__code);
- if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+ if (auto __node = __h->_M_find_node(__bkt, __k, __code))
return __node->_M_v().second;
typename __hashtable::_Scoped_node __node {
@@ -731,7 +729,7 @@ namespace __detail
__hashtable* __h = static_cast<__hashtable*>(this);
__hash_code __code = __h->_M_hash_code(__k);
std::size_t __bkt = __h->_M_bucket_index(__code);
- if (__node_type* __node = __h->_M_find_node(__bkt, __k, __code))
+ if (auto __node = __h->_M_find_node(__bkt, __k, __code))
return __node->_M_v().second;
typename __hashtable::_Scoped_node __node {
@@ -800,15 +798,25 @@ namespace __detail
_Hash, _RangeHash,
_Unused, _RehashPolicy, _Traits>;
+ using __hash_cached = typename _Traits::__hash_cached;
+ using __constant_iterators = typename _Traits::__constant_iterators;
+ using __unique_keys = typename _Traits::__unique_keys;
+
+ using __hashtable_alloc = _Hashtable_alloc<
+ __alloc_rebind<_Alloc, _Hash_node<
+#if __cplusplus > 201703L || defined __STRICT_ANSI__
+ typename std::allocator_traits<_Alloc>::pointer,
+#else
+ typename std::allocator_traits<
+ __alloc_rebind<_Alloc, _Value>>::pointer,
+#endif
+ __hash_cached::value>>>;
+
using value_type = typename __hashtable_base::value_type;
- using iterator = typename __hashtable_base::iterator;
- using const_iterator = typename __hashtable_base::const_iterator;
using size_type = typename __hashtable_base::size_type;
- using __unique_keys = typename __hashtable_base::__unique_keys;
- using __ireturn_type = typename __hashtable_base::__ireturn_type;
- using __node_type = _Hash_node<_Value, _Traits::__hash_cached::value>;
- using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
+ using __node_pointer = typename __hashtable_alloc::__node_pointer;
+ using __node_alloc_type = typename __hashtable_alloc::__node_alloc_type;
using __node_gen_type = _AllocNode<__node_alloc_type>;
__hashtable&
@@ -826,6 +834,16 @@ namespace __detail
const _NodeGetter&, false_type __uks);
public:
+ using iterator = _Node_iterator<__node_pointer,
+ __constant_iterators::value>;
+
+ using const_iterator = _Node_const_iterator<__node_pointer,
+ __constant_iterators::value>;
+
+ using __ireturn_type = typename std::conditional<__unique_keys::value,
+ std::pair<iterator, bool>,
+ iterator>::type;
+
__ireturn_type
insert(const value_type& __v)
{
@@ -849,7 +867,7 @@ namespace __detail
__hashtable& __h = _M_conjure_hashtable();
auto __code = __h._M_hash_code(__k);
std::size_t __bkt = __h._M_bucket_index(__code);
- if (__node_type* __node = __h._M_find_node(__bkt, __k, __code))
+ if (__node_pointer __node = __h._M_find_node(__bkt, __k, __code))
return { iterator(__node), false };
typename __hashtable::_Scoped_node __node {
@@ -957,16 +975,12 @@ namespace __detail
_Equal, _Hash, _RangeHash, _Unused,
_RehashPolicy, _Traits>;
- using __hashtable_base = _Hashtable_base<_Key, _Value, _ExtractKey,
- _Equal, _Hash, _RangeHash,
- _Unused, _Traits>;
-
using value_type = typename __base_type::value_type;
using iterator = typename __base_type::iterator;
using const_iterator = typename __base_type::const_iterator;
+ using __ireturn_type = typename __base_type::__ireturn_type;
using __unique_keys = typename __base_type::__unique_keys;
- using __ireturn_type = typename __hashtable_base::__ireturn_type;
using __hashtable = typename __base_type::__hashtable;
using __node_gen_type = typename __base_type::__node_gen_type;
@@ -1153,7 +1167,7 @@ namespace __detail
* Base class for local iterators, used to iterate within a bucket
* but not between buckets.
*/
- template<typename _Key, typename _Value, typename _ExtractKey,
+ template<typename _Key, typename _NodePtr, typename _ExtractKey,
typename _Hash, typename _RangeHash, typename _Unused,
bool __cache_hash_code>
struct _Local_iterator_base;
@@ -1175,30 +1189,16 @@ namespace __detail
* is inherited in some cases by the _Local_iterator_base type used
* to implement local_iterator and const_local_iterator. As with
* any iterator type we prefer to make it as small as possible.
- *
- * Primary template is unused except as a hook for specializations.
*/
template<typename _Key, typename _Value, typename _ExtractKey,
typename _Hash, typename _RangeHash, typename _Unused,
bool __cache_hash_code>
- struct _Hash_code_base;
-
- /// Specialization: hash function and range-hashing function, no
- /// caching of hash codes.
- /// Provides typedef and accessor required by C++ 11.
- template<typename _Key, typename _Value, typename _ExtractKey,
- typename _Hash, typename _RangeHash, typename _Unused>
- struct _Hash_code_base<_Key, _Value, _ExtractKey, _Hash, _RangeHash,
- _Unused, false>
+ struct _Hash_code_base
: private _Hashtable_ebo_helper<1, _Hash>
{
private:
using __ebo_hash = _Hashtable_ebo_helper<1, _Hash>;
- // Gives the local iterator implementation access to _M_bucket_index().
- friend struct _Local_iterator_base<_Key, _Value, _ExtractKey,
- _Hash, _RangeHash, _Unused, false>;
-
public:
typedef _Hash hasher;
@@ -1208,7 +1208,6 @@ namespace __detail
protected:
typedef std::size_t __hash_code;
- typedef _Hash_node<_Value, false> __node_type;
// We need the default constructor for the local iterators and _Hashtable
// default constructor.
@@ -1228,83 +1227,40 @@ namespace __detail
{ return _RangeHash{}(__c, __bkt_count); }
std::size_t
- _M_bucket_index(const __node_type* __p, std::size_t __bkt_count) const
+ _M_bucket_index(const _Hash_node_value<_Value, false>& __n,
+ std::size_t __bkt_count) const
noexcept( noexcept(declval<const _Hash&>()(declval<const _Key&>()))
&& noexcept(declval<const _RangeHash&>()((__hash_code)0,
(std::size_t)0)) )
{
- return _RangeHash{}(_M_hash()(_ExtractKey{}(__p->_M_v())),
+ return _RangeHash{}(_M_hash_code(_ExtractKey{}(__n._M_v())),
__bkt_count);
}
- void
- _M_store_code(__node_type*, __hash_code) const
- { }
+ std::size_t
+ _M_bucket_index(const _Hash_node_value<_Value, true>& __n,
+ std::size_t __bkt_count) const
+ noexcept( noexcept(declval<const _RangeHash&>()((__hash_code)0,
+ (std::size_t)0)) )
+ { return _RangeHash{}(__n._M_hash_code, __bkt_count); }
void
- _M_copy_code(__node_type*, const __node_type*) const
+ _M_store_code(_Hash_node_code_cache<false>&, __hash_code) const
{ }
void
- _M_swap(_Hash_code_base& __x)
- { std::swap(__ebo_hash::_M_get(), __x.__ebo_hash::_M_get()); }
-
- const _Hash&
- _M_hash() const { return __ebo_hash::_M_cget(); }
- };
-
- /// Specialization: hash function and range-hashing function,
- /// caching hash codes. H is provided but ignored. Provides
- /// typedef and accessor required by C++ 11.
- template<typename _Key, typename _Value, typename _ExtractKey,
- typename _Hash, typename _RangeHash, typename _Unused>
- struct _Hash_code_base<_Key, _Value, _ExtractKey, _Hash, _RangeHash,
- _Unused, true>
- : private _Hashtable_ebo_helper<1, _Hash>
- {
- private:
- using __ebo_hash = _Hashtable_ebo_helper<1, _Hash>;
-
- public:
- typedef _Hash hasher;
-
- hasher
- hash_function() const
- { return _M_hash(); }
-
- protected:
- typedef std::size_t __hash_code;
- typedef _Hash_node<_Value, true> __node_type;
-
- // We need the default constructor for _Hashtable default constructor.
- _Hash_code_base() = default;
- _Hash_code_base(const _Hash& __hash) : __ebo_hash(__hash) { }
-
- __hash_code
- _M_hash_code(const _Key& __k) const
- {
- static_assert(__is_invocable<const _Hash&, const _Key&>{},
- "hash function must be invocable with an argument of key type");
- return _M_hash()(__k);
- }
-
- std::size_t
- _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
- { return _RangeHash{}(__c, __bkt_count); }
-
- std::size_t
- _M_bucket_index(const __node_type* __p, std::size_t __bkt_count) const
- noexcept( noexcept(declval<const _RangeHash&>()((__hash_code)0,
- (std::size_t)0)) )
- { return _RangeHash{}(__p->_M_hash_code, __bkt_count); }
+ _M_copy_code(_Hash_node_code_cache<false>&,
+ const _Hash_node_code_cache<false>&) const
+ { }
void
- _M_store_code(__node_type* __n, __hash_code __c) const
- { __n->_M_hash_code = __c; }
+ _M_store_code(_Hash_node_code_cache<true>& __n, __hash_code __c) const
+ { __n._M_hash_code = __c; }
void
- _M_copy_code(__node_type* __to, const __node_type* __from) const
- { __to->_M_hash_code = __from->_M_hash_code; }
+ _M_copy_code(_Hash_node_code_cache<true>& __to,
+ const _Hash_node_code_cache<true>& __from) const
+ { __to._M_hash_code = __from._M_hash_code; }
void
_M_swap(_Hash_code_base& __x)
@@ -1315,20 +1271,17 @@ namespace __detail
};
/// Partial specialization used when nodes contain a cached hash code.
- template<typename _Key, typename _Value, typename _ExtractKey,
+ template<typename _Key, typename _NodePtr, typename _ExtractKey,
typename _Hash, typename _RangeHash, typename _Unused>
- struct _Local_iterator_base<_Key, _Value, _ExtractKey,
+ struct _Local_iterator_base<_Key, _NodePtr, _ExtractKey,
_Hash, _RangeHash, _Unused, true>
- : public _Node_iterator_base<_Value, true>
+ : public _Node_iterator_base<_NodePtr>
{
protected:
- using __base_node_iter = _Node_iterator_base<_Value, true>;
- using __hash_code_base = _Hash_code_base<_Key, _Value, _ExtractKey,
- _Hash, _RangeHash, _Unused, true>;
+ using __base_node_iter = _Node_iterator_base<_NodePtr>;
_Local_iterator_base() = default;
- _Local_iterator_base(const __hash_code_base&,
- _Hash_node<_Value, true>* __p,
+ _Local_iterator_base(const _Hash&, _NodePtr __p,
std::size_t __bkt, std::size_t __bkt_count)
: __base_node_iter(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count)
{ }
@@ -1354,12 +1307,12 @@ namespace __detail
_M_get_bucket() const { return _M_bucket; } // for debug mode
};
- // Uninitialized storage for a _Hash_code_base.
+ // Uninitialized storage for a _Hash.
// This type is DefaultConstructible and Assignable even if the
- // _Hash_code_base type isn't, so that _Local_iterator_base<..., false>
+ // _Hash type isn't, so that _Local_iterator_base<..., false>
// can be DefaultConstructible and Assignable.
template<typename _Tp, bool _IsEmpty = std::is_empty<_Tp>::value>
- struct _Hash_code_storage
+ struct _Hash_storage
{
__gnu_cxx::__aligned_buffer<_Tp> _M_storage;
@@ -1370,9 +1323,9 @@ namespace __detail
_M_h() const { return _M_storage._M_ptr(); }
};
- // Empty partial specialization for empty _Hash_code_base types.
+ // Empty partial specialization for empty _Hash types.
template<typename _Tp>
- struct _Hash_code_storage<_Tp, true>
+ struct _Hash_storage<_Tp, true>
{
static_assert( std::is_empty<_Tp>::value, "Type must be empty" );
@@ -1385,33 +1338,23 @@ namespace __detail
_M_h() const { return reinterpret_cast<const _Tp*>(this); }
};
- template<typename _Key, typename _Value, typename _ExtractKey,
- typename _Hash, typename _RangeHash, typename _Unused>
- using __hash_code_for_local_iter
- = _Hash_code_storage<_Hash_code_base<_Key, _Value, _ExtractKey,
- _Hash, _RangeHash, _Unused, false>>;
-
// Partial specialization used when hash codes are not cached
- template<typename _Key, typename _Value, typename _ExtractKey,
+ template<typename _Key, typename _NodePtr, typename _ExtractKey,
typename _Hash, typename _RangeHash, typename _Unused>
- struct _Local_iterator_base<_Key, _Value, _ExtractKey,
+ struct _Local_iterator_base<_Key, _NodePtr, _ExtractKey,
_Hash, _RangeHash, _Unused, false>
- : __hash_code_for_local_iter<_Key, _Value, _ExtractKey, _Hash, _RangeHash,
- _Unused>
- , _Node_iterator_base<_Value, false>
+ : _Hash_storage<_Hash>
+ , _Node_iterator_base<_NodePtr>
{
protected:
- using __hash_code_base = _Hash_code_base<_Key, _Value, _ExtractKey,
- _Hash, _RangeHash, _Unused, false>;
- using __node_iter_base = _Node_iterator_base<_Value, false>;
+ using __node_iter_base = _Node_iterator_base<_NodePtr>;
_Local_iterator_base() : _M_bucket_count(-1) { }
- _Local_iterator_base(const __hash_code_base& __base,
- _Hash_node<_Value, false>* __p,
+ _Local_iterator_base(const _Hash& __hasher, _NodePtr __p,
std::size_t __bkt, std::size_t __bkt_count)
: __node_iter_base(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count)
- { _M_init(__base); }
+ { _M_init(__hasher); }
~_Local_iterator_base()
{
@@ -1420,7 +1363,7 @@ namespace __detail
}
_Local_iterator_base(const _Local_iterator_base& __iter)
- : __node_iter_base(__iter), _M_bucket(__iter._M_bucket)
+ : __node_iter_base(__iter._M_cur), _M_bucket(__iter._M_bucket)
, _M_bucket_count(__iter._M_bucket_count)
{
if (_M_bucket_count != -1)
@@ -1446,8 +1389,9 @@ namespace __detail
__node_iter_base::_M_incr();
if (this->_M_cur)
{
- std::size_t __bkt = this->_M_h()->_M_bucket_index(this->_M_cur,
- _M_bucket_count);
+ std::size_t __bkt =
+ _RangeHash{}((*this->_M_h())(_ExtractKey{}(this->_M_cur->_M_v())),
+ _M_bucket_count);
if (__bkt != _M_bucket)
this->_M_cur = nullptr;
}
@@ -1457,11 +1401,11 @@ namespace __detail
std::size_t _M_bucket_count;
void
- _M_init(const __hash_code_base& __base)
- { ::new(this->_M_h()) __hash_code_base(__base); }
+ _M_init(const _Hash& __hasher)
+ { ::new(this->_M_h()) _Hash(__hasher); }
void
- _M_destroy() { this->_M_h()->~__hash_code_base(); }
+ _M_destroy() { this->_M_h()->~_Hash(); }
public:
std::size_t
@@ -1469,35 +1413,35 @@ namespace __detail
};
/// local iterators
- template<typename _Key, typename _Value, typename _ExtractKey,
+ template<typename _Key, typename _NodePtr, typename _ExtractKey,
typename _Hash, typename _RangeHash, typename _Unused,
bool __constant_iterators, bool __cache>
struct _Local_iterator
- : public _Local_iterator_base<_Key, _Value, _ExtractKey,
+ : public _Local_iterator_base<_Key, _NodePtr, _ExtractKey,
_Hash, _RangeHash, _Unused, __cache>
{
private:
- using __base_type = _Local_iterator_base<_Key, _Value, _ExtractKey,
- _Hash, _RangeHash, _Unused, __cache>;
- using __hash_code_base = typename __base_type::__hash_code_base;
+ using __base_type = _Local_iterator_base<_Key, _NodePtr, _ExtractKey,
+ _Hash, _RangeHash, _Unused, __cache>;
+ using __node_type = typename std::pointer_traits<_NodePtr>::element_type;
public:
- typedef _Value value_type;
+ typedef typename __node_type::value_type value_type;
typedef typename std::conditional<__constant_iterators,
- const _Value*, _Value*>::type
+ const value_type*, value_type*>::type
pointer;
typedef typename std::conditional<__constant_iterators,
- const _Value&, _Value&>::type
+ const value_type&, value_type&>::type
reference;
typedef std::ptrdiff_t difference_type;
typedef std::forward_iterator_tag iterator_category;
_Local_iterator() = default;
- _Local_iterator(const __hash_code_base& __base,
- _Hash_node<_Value, __cache>* __n,
+ _Local_iterator(const _Hash& __hasher,
+ _NodePtr __n,
std::size_t __bkt, std::size_t __bkt_count)
- : __base_type(__base, __n, __bkt, __bkt_count)
+ : __base_type(__hasher, __n, __bkt, __bkt_count)
{ }
reference
@@ -1525,37 +1469,37 @@ namespace __detail
};
/// local const_iterators
- template<typename _Key, typename _Value, typename _ExtractKey,
+ template<typename _Key, typename _NodePtr, typename _ExtractKey,
typename _Hash, typename _RangeHash, typename _Unused,
bool __constant_iterators, bool __cache>
struct _Local_const_iterator
- : public _Local_iterator_base<_Key, _Value, _ExtractKey,
+ : public _Local_iterator_base<_Key, _NodePtr, _ExtractKey,
_Hash, _RangeHash, _Unused, __cache>
{
private:
- using __base_type = _Local_iterator_base<_Key, _Value, _ExtractKey,
- _Hash, _RangeHash, _Unused, __cache>;
- using __hash_code_base = typename __base_type::__hash_code_base;
+ using __base_type = _Local_iterator_base<_Key, _NodePtr, _ExtractKey,
+ _Hash, _RangeHash, _Unused, __cache>;
+ using __node_type = typename std::pointer_traits<_NodePtr>::element_type;
public:
- typedef _Value value_type;
- typedef const _Value* pointer;
- typedef const _Value& reference;
+ typedef typename __node_type::value_type value_type;
+ typedef const value_type* pointer;
+ typedef const value_type& reference;
typedef std::ptrdiff_t difference_type;
typedef std::forward_iterator_tag iterator_category;
_Local_const_iterator() = default;
- _Local_const_iterator(const __hash_code_base& __base,
- _Hash_node<_Value, __cache>* __n,
+ _Local_const_iterator(const _Hash& __hasher,
+ _NodePtr __n,
std::size_t __bkt, std::size_t __bkt_count)
- : __base_type(__base, __n, __bkt, __bkt_count)
+ : __base_type(__hasher, __n, __bkt, __bkt_count)
{ }
- _Local_const_iterator(const _Local_iterator<_Key, _Value, _ExtractKey,
- _Hash, _RangeHash, _Unused,
- __constant_iterators,
- __cache>& __x)
+ _Local_const_iterator(const _Local_iterator<_Key, _NodePtr, _ExtractKey,
+ _Hash, _RangeHash, _Unused,
+ __constant_iterators,
+ __cache>& __x)
: __base_type(__x)
{ }
@@ -1599,110 +1543,80 @@ namespace __detail
struct _Hashtable_base
: public _Hash_code_base<_Key, _Value, _ExtractKey, _Hash, _RangeHash,
_Unused, _Traits::__hash_cached::value>,
- private _Hashtable_ebo_helper<0, _Equal>
- {
- public:
- typedef _Key key_type;
- typedef _Value value_type;
- typedef _Equal key_equal;
- typedef std::size_t size_type;
- typedef std::ptrdiff_t difference_type;
-
- using __traits_type = _Traits;
- using __hash_cached = typename __traits_type::__hash_cached;
- using __constant_iterators = typename __traits_type::__constant_iterators;
- using __unique_keys = typename __traits_type::__unique_keys;
-
- using __hash_code_base = _Hash_code_base<_Key, _Value, _ExtractKey,
- _Hash, _RangeHash, _Unused,
- __hash_cached::value>;
-
- using __hash_code = typename __hash_code_base::__hash_code;
- using __node_type = typename __hash_code_base::__node_type;
-
- using iterator = _Node_iterator<value_type,
- __constant_iterators::value,
- __hash_cached::value>;
-
- using const_iterator = _Node_const_iterator<value_type,
- __constant_iterators::value,
- __hash_cached::value>;
-
- using local_iterator = _Local_iterator<key_type, value_type,
- _ExtractKey, _Hash, _RangeHash, _Unused,
- __constant_iterators::value,
- __hash_cached::value>;
-
- using const_local_iterator = _Local_const_iterator<key_type, value_type,
- _ExtractKey, _Hash, _RangeHash, _Unused,
- __constant_iterators::value,
- __hash_cached::value>;
-
- using __ireturn_type = typename std::conditional<__unique_keys::value,
- std::pair<iterator, bool>,
- iterator>::type;
- private:
- using _EqualEBO = _Hashtable_ebo_helper<0, _Equal>;
+ private _Hashtable_ebo_helper<0, _Equal>
+ {
+ public:
+ typedef _Key key_type;
+ typedef _Value value_type;
+ typedef _Equal key_equal;
+ typedef std::size_t size_type;
+ typedef std::ptrdiff_t difference_type;
- template<typename _NodeT>
- struct _Equal_hash_code
- {
- static bool
- _S_equals(__hash_code, const _NodeT&)
- { return true; }
+ using __traits_type = _Traits;
+ using __hash_cached = typename __traits_type::__hash_cached;
- static bool
- _S_node_equals(const _NodeT&, const _NodeT&)
- { return true; }
- };
+ using __hash_code_base = _Hash_code_base<_Key, _Value, _ExtractKey,
+ _Hash, _RangeHash, _Unused,
+ __hash_cached::value>;
- template<typename _Ptr2>
- struct _Equal_hash_code<_Hash_node<_Ptr2, true>>
- {
- static bool
- _S_equals(__hash_code __c, const _Hash_node<_Ptr2, true>& __n)
- { return __c == __n._M_hash_code; }
-
- static bool
- _S_node_equals(const _Hash_node<_Ptr2, true>& __lhn,
- const _Hash_node<_Ptr2, true>& __rhn)
- { return __lhn._M_hash_code == __rhn._M_hash_code; }
- };
+ using __hash_code = typename __hash_code_base::__hash_code;
- protected:
- _Hashtable_base() = default;
- _Hashtable_base(const _Hash& __hash, const _Equal& __eq)
- : __hash_code_base(__hash), _EqualEBO(__eq)
- { }
+ private:
+ using _EqualEBO = _Hashtable_ebo_helper<0, _Equal>;
- bool
- _M_equals(const _Key& __k, __hash_code __c, const __node_type* __n) const
- {
- static_assert(__is_invocable<const _Equal&, const _Key&, const _Key&>{},
+ static bool
+ _S_equals(__hash_code, const _Hash_node_code_cache<false>&)
+ { return true; }
+
+ static bool
+ _S_node_equals(const _Hash_node_code_cache<false>&,
+ const _Hash_node_code_cache<false>&)
+ { return true; }
+
+ static bool
+ _S_equals(__hash_code __c, const _Hash_node_code_cache<true>& __n)
+ { return __c == __n._M_hash_code; }
+
+ static bool
+ _S_node_equals(const _Hash_node_code_cache<true>& __lhn,
+ const _Hash_node_code_cache<true>& __rhn)
+ { return __lhn._M_hash_code == __rhn._M_hash_code; }
+
+ protected:
+ _Hashtable_base() = default;
+ _Hashtable_base(const _Hash& __hash, const _Equal& __eq)
+ : __hash_code_base(__hash), _EqualEBO(__eq)
+ { }
+
+ bool
+ _M_equals(const _Key& __k, __hash_code __c,
+ const _Hash_node_value<_Value, __hash_cached::value>& __n) const
+ {
+ static_assert(__is_invocable<const _Equal&, const _Key&, const _Key&>{},
"key equality predicate must be invocable with two arguments of "
"key type");
- return _Equal_hash_code<__node_type>::_S_equals(__c, *__n)
- && _M_eq()(__k, _ExtractKey{}(__n->_M_v()));
- }
+ return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
+ }
- bool
- _M_node_equals(const __node_type* __lhn, const __node_type* __rhn) const
- {
- return _Equal_hash_code<__node_type>::_S_node_equals(*__lhn, *__rhn)
- && _M_eq()(_ExtractKey{}(__lhn->_M_v()),
- _ExtractKey{}(__rhn->_M_v()));
- }
+ bool
+ _M_node_equals(
+ const _Hash_node_value<_Value, __hash_cached::value>& __lhn,
+ const _Hash_node_value<_Value, __hash_cached::value>& __rhn) const
+ {
+ return _S_node_equals(__lhn, __rhn)
+ && _M_eq()(_ExtractKey{}(__lhn._M_v()), _ExtractKey{}(__rhn._M_v()));
+ }
- void
- _M_swap(_Hashtable_base& __x)
- {
- __hash_code_base::_M_swap(__x);
- std::swap(_EqualEBO::_M_get(), __x._EqualEBO::_M_get());
- }
+ void
+ _M_swap(_Hashtable_base& __x)
+ {
+ __hash_code_base::_M_swap(__x);
+ std::swap(_EqualEBO::_M_get(), __x._EqualEBO::_M_get());
+ }
- const _Equal&
- _M_eq() const { return _EqualEBO::_M_cget(); }
- };
+ const _Equal&
+ _M_eq() const { return _EqualEBO::_M_cget(); }
+ };
/**
* Primary class template _Equality.
@@ -1744,27 +1658,24 @@ namespace __detail
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits, true>::
_M_equal(const __hashtable& __other) const
{
- using __node_base = typename __hashtable::__node_base;
- using __node_type = typename __hashtable::__node_type;
const __hashtable* __this = static_cast<const __hashtable*>(this);
if (__this->size() != __other.size())
return false;
for (auto __itx = __this->begin(); __itx != __this->end(); ++__itx)
{
- std::size_t __ybkt = __other._M_bucket_index(__itx._M_cur);
- __node_base* __prev_n = __other._M_buckets[__ybkt];
+ std::size_t __ybkt = __other._M_bucket_index(*__itx._M_cur);
+ auto __prev_n = __other._M_buckets[__ybkt];
if (!__prev_n)
return false;
- for (__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);;
- __n = __n->_M_next())
+ for (auto __n = __prev_n->_M_nxt;; __n = __n->_M_nxt)
{
if (__n->_M_v() == *__itx)
break;
if (!__n->_M_nxt
- || __other._M_bucket_index(__n->_M_next()) != __ybkt)
+ || __other._M_bucket_index(*__n->_M_nxt) != __ybkt)
return false;
}
}
@@ -1797,8 +1708,6 @@ namespace __detail
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits, false>::
_M_equal(const __hashtable& __other) const
{
- using __node_base = typename __hashtable::__node_base;
- using __node_type = typename __hashtable::__node_type;
const __hashtable* __this = static_cast<const __hashtable*>(this);
if (__this->size() != __other.size())
return false;
@@ -1813,24 +1722,24 @@ namespace __detail
++__itx_end)
++__x_count;
- std::size_t __ybkt = __other._M_bucket_index(__itx._M_cur);
- __node_base* __y_prev_n = __other._M_buckets[__ybkt];
+ std::size_t __ybkt = __other._M_bucket_index(*__itx._M_cur);
+ auto __y_prev_n = __other._M_buckets[__ybkt];
if (!__y_prev_n)
return false;
- __node_type* __y_n = static_cast<__node_type*>(__y_prev_n->_M_nxt);
+ auto __y_n = __y_prev_n->_M_nxt;
for (;;)
{
if (__this->key_eq()(_ExtractKey{}(__y_n->_M_v()),
_ExtractKey{}(*__itx)))
break;
- __node_type* __y_ref_n = __y_n;
- for (__y_n = __y_n->_M_next(); __y_n; __y_n = __y_n->_M_next())
- if (!__other._M_node_equals(__y_ref_n, __y_n))
+ auto __y_ref_n = __y_n;
+ for (__y_n = __y_n->_M_nxt; __y_n; __y_n = __y_n->_M_nxt)
+ if (!__other._M_node_equals(*__y_ref_n, *__y_n))
break;
- if (!__y_n || __other._M_bucket_index(__y_n) != __ybkt)
+ if (!__y_n || __other._M_bucket_index(*__y_n) != __ybkt)
return false;
}
@@ -1868,11 +1777,13 @@ namespace __detail
using __value_alloc_traits = typename __node_alloc_traits::template
rebind_traits<typename __node_type::value_type>;
- using __node_base = __detail::_Hash_node_base;
- using __bucket_type = __node_base*;
- using __bucket_alloc_type =
- __alloc_rebind<__node_alloc_type, __bucket_type>;
- using __bucket_alloc_traits = std::allocator_traits<__bucket_alloc_type>;
+ using __node_pointer = typename __node_alloc_traits::pointer;
+ using __node_base = __detail::_Hash_node_base<__node_pointer>;
+ using __node_base_ptr = __ptr_rebind<__node_pointer, __node_base>;
+ using __buckets_alloc_type =
+ __alloc_rebind<__node_alloc_type, __node_base_ptr>;
+ using __buckets_alloc_traits = std::allocator_traits<__buckets_alloc_type>;
+ using __buckets_pointer = typename __buckets_alloc_traits::pointer;
_Hashtable_alloc() = default;
_Hashtable_alloc(const _Hashtable_alloc&) = default;
@@ -1893,27 +1804,27 @@ namespace __detail
// Allocate a node and construct an element within it.
template<typename... _Args>
- __node_type*
+ __node_pointer
_M_allocate_node(_Args&&... __args);
// Destroy the element within a node and deallocate the node.
void
- _M_deallocate_node(__node_type* __n);
+ _M_deallocate_node(__node_pointer __n);
// Deallocate a node.
void
- _M_deallocate_node_ptr(__node_type* __n);
+ _M_deallocate_node_ptr(__node_pointer __n);
// Deallocate the linked list of nodes pointed to by __n.
// The elements within the nodes are destroyed.
void
- _M_deallocate_nodes(__node_type* __n);
+ _M_deallocate_nodes(__node_pointer __n);
- __bucket_type*
+ __buckets_pointer
_M_allocate_buckets(std::size_t __bkt_count);
void
- _M_deallocate_buckets(__bucket_type*, std::size_t __bkt_count);
+ _M_deallocate_buckets(__buckets_pointer, std::size_t __bkt_count);
};
// Definitions of class template _Hashtable_alloc's out-of-line member
@@ -1922,17 +1833,16 @@ namespace __detail
template<typename... _Args>
auto
_Hashtable_alloc<_NodeAlloc>::_M_allocate_node(_Args&&... __args)
- -> __node_type*
+ -> __node_pointer
{
auto __nptr = __node_alloc_traits::allocate(_M_node_allocator(), 1);
- __node_type* __n = std::__to_address(__nptr);
__try
{
- ::new ((void*)__n) __node_type;
+ ::new ((void*)std::__to_address(__nptr)) __node_type;
__node_alloc_traits::construct(_M_node_allocator(),
- __n->_M_valptr(),
+ __nptr->_M_valptr(),
std::forward<_Args>(__args)...);
- return __n;
+ return __nptr;
}
__catch(...)
{
@@ -1943,55 +1853,53 @@ namespace __detail
template<typename _NodeAlloc>
void
- _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node(__node_type* __n)
+ _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node(__node_pointer __nptr)
{
- __node_alloc_traits::destroy(_M_node_allocator(), __n->_M_valptr());
- _M_deallocate_node_ptr(__n);
+ __node_alloc_traits::destroy(_M_node_allocator(), __nptr->_M_valptr());
+ _M_deallocate_node_ptr(__nptr);
}
template<typename _NodeAlloc>
void
- _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node_ptr(__node_type* __n)
+ _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node_ptr(__node_pointer __nptr)
{
- typedef typename __node_alloc_traits::pointer _Ptr;
- auto __ptr = std::pointer_traits<_Ptr>::pointer_to(*__n);
- __n->~__node_type();
- __node_alloc_traits::deallocate(_M_node_allocator(), __ptr, 1);
+ __nptr->~__node_type();
+ __node_alloc_traits::deallocate(_M_node_allocator(), __nptr, 1);
}
template<typename _NodeAlloc>
void
- _Hashtable_alloc<_NodeAlloc>::_M_deallocate_nodes(__node_type* __n)
+ _Hashtable_alloc<_NodeAlloc>::_M_deallocate_nodes(__node_pointer __nptr)
{
- while (__n)
+ while (__nptr)
{
- __node_type* __tmp = __n;
- __n = __n->_M_next();
+ __node_pointer __tmp = __nptr;
+ __nptr = __nptr->_M_nxt;
_M_deallocate_node(__tmp);
}
}
template<typename _NodeAlloc>
- typename _Hashtable_alloc<_NodeAlloc>::__bucket_type*
- _Hashtable_alloc<_NodeAlloc>::_M_allocate_buckets(std::size_t __bkt_count)
+ auto
+ _Hashtable_alloc<_NodeAlloc>::
+ _M_allocate_buckets(std::size_t __bkt_count)
+ -> __buckets_pointer
{
- __bucket_alloc_type __alloc(_M_node_allocator());
+ __buckets_alloc_type __alloc(_M_node_allocator());
- auto __ptr = __bucket_alloc_traits::allocate(__alloc, __bkt_count);
- __bucket_type* __p = std::__to_address(__ptr);
- __builtin_memset(__p, 0, __bkt_count * sizeof(__bucket_type));
- return __p;
+ auto __ptr = __buckets_alloc_traits::allocate(__alloc, __bkt_count);
+ std::fill_n(__ptr, __bkt_count, nullptr);
+ return __ptr;
}
template<typename _NodeAlloc>
void
- _Hashtable_alloc<_NodeAlloc>::_M_deallocate_buckets(__bucket_type* __bkts,
- std::size_t __bkt_count)
+ _Hashtable_alloc<_NodeAlloc>::
+ _M_deallocate_buckets(__buckets_pointer __bkts,
+ std::size_t __bkt_count)
{
- typedef typename __bucket_alloc_traits::pointer _Ptr;
- auto __ptr = std::pointer_traits<_Ptr>::pointer_to(*__bkts);
- __bucket_alloc_type __alloc(_M_node_allocator());
- __bucket_alloc_traits::deallocate(__alloc, __ptr, __bkt_count);
+ __buckets_alloc_type __alloc(_M_node_allocator());
+ __buckets_alloc_traits::deallocate(__alloc, __bkts, __bkt_count);
}
//@} hashtable-detail
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..e9d7ada7151
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc
@@ -0,0 +1,57 @@
+// Copyright (C) 2020 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_map>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+ std::size_t
+ operator()(const T& t) const noexcept
+ { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_map<T, int, H, E,
+ CustomPointerAlloc<std::pair<const T, int>>>;
+
+void test01()
+{
+ typedef CustomPointerAlloc<std::pair<const T, int>> alloc_type;
+ typedef std::unordered_map<T, int, H, E, alloc_type> test_type;
+ test_type v;
+ v.insert({ T(), 0 });
+ VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+ test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..4a895a6302c
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc
@@ -0,0 +1,57 @@
+// Copyright (C) 2020 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_map>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+ std::size_t
+ operator()(const T& t) const noexcept
+ { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_multimap<T, int, H, E,
+ CustomPointerAlloc<std::pair<const T, int>>>;
+
+void test01()
+{
+ typedef CustomPointerAlloc<std::pair<const T, int>> alloc_type;
+ typedef std::unordered_multimap<T, int, H, E, alloc_type> test_type;
+ test_type v;
+ v.insert({ T(), 0 });
+ VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+ test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..36b5e10cc7b
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc
@@ -0,0 +1,56 @@
+// Copyright (C) 2020 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_set>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+ std::size_t
+ operator()(const T& t) const noexcept
+ { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_multiset<T, H, E, CustomPointerAlloc<T>>;
+
+void test01()
+{
+ typedef CustomPointerAlloc<T> alloc_type;
+ typedef std::unordered_multiset<T, H, E, alloc_type> test_type;
+ test_type v;
+ v.insert(T());
+ VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+ test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
index f6b908ac03e..479104709fb 100644
--- a/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
+++ b/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
@@ -15,10 +15,7 @@
// with this library; see the file COPYING3. If not see
// <http://www.gnu.org/licenses/>.
-// This test fails to compile since C++17 (see xfail-if below) so we can only
-// do a "run" test for C++11 and C++14, and a "compile" test for C++17 and up.
-// { dg-do run { target { c++11_only || c++14_only } } }
-// { dg-do compile { target c++17 } }
+// { dg-do run { target { c++11 } } }
#include <unordered_set>
#include <memory>
@@ -26,15 +23,22 @@
#include <testsuite_allocator.h>
struct T { int i; };
-bool operator==(const T& l, const T& r) { return l.i == r.i; }
-struct H { std::size_t operator()(const T& t) const noexcept { return t.i; }
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+ std::size_t
+ operator()(const T& t) const noexcept
+ { return t.i; }
};
-struct E : std::equal_to<T> { };
+
+struct E : std::equal_to<T>
+{ };
using __gnu_test::CustomPointerAlloc;
-// { dg-xfail-if "node reinsertion assumes raw pointers" { c++17 } }
-// TODO when removing this xfail change the test back to "dg-do run".
template class std::unordered_set<T, H, E, CustomPointerAlloc<T>>;
void test01()