From: "François Dumont" <frs.dumont@gmail.com>
To: Jonathan Wakely <jwakely@redhat.com>
Cc: "libstdc++@gcc.gnu.org" <libstdc++@gcc.gnu.org>,
gcc-patches <gcc-patches@gcc.gnu.org>
Subject: Re: libstdc++ PR 57272 Fancy pointer support in Hashtable
Date: Mon, 11 Jan 2021 19:10:24 +0100 [thread overview]
Message-ID: <636a415e-84c6-64f9-a408-638909c82a4f@gmail.com> (raw)
In-Reply-To: <20201102141120.GR503596@redhat.com>
[-- Attachment #1: Type: text/plain, Size: 17596 bytes --]
Hi
I had another look at this attempt to properly support allocator fancy
pointers.
I considered all your remarks apart from the big one below about all
this being a waste of time :-)
I do not see why we should use the allocator fancy pointer type in our
_Hashtable implementation details. It is not noticeable from a user
standpoint unless they want to track all dereferences or '->'
operator usages, which would be quite odd.
For now I just consider that we should store the fancy pointer
coming from the allocator::allocate calls as-is and return it to the
allocator when needed, without going through pointer_traits::pointer_to
as we used to do. This should preserve any additional data the allocator
might associate with the raw pointer.
Even if the Standard says we should extend the fancy pointer
usage, this patch is still a good first step, and an unavoidable one on
the way to the potential final picture. We could still provide this for
now and see if users have complaints about it.
This patch also implements a small refinement by using fancy
pointer move semantics in a couple of situations. I see that node_handle
is not doing this, but I consider it a potential node handle enhancement.
I am still completing the execution of the tests, but the unordered
ones are OK in both normal and debug modes.
libstdc++: Store allocator::pointer in hashtable implementation

In the _Hashtable implementation, store the allocator::pointer returned
by the allocate call as-is and return it to deallocate when necessary.
This is true for both allocated nodes and buckets.

Note that internally, as an implementation detail, we still use raw
pointers in iterators and buckets.
libstdc++-v3/ChangeLog:

* include/bits/hashtable_policy.h (__alloc_val_ptr<>): New template alias.
(_ReuseOrAllocNode<>::__node_type): Remove.
(_ReuseOrAllocNode<>::__node_ptr): New.
(_ReuseOrAllocNode<>::operator()(_Arg&&)): Return latter.
(_ReuseOrAllocNode(__node_ptr, __hashtable_alloc&)): Adapt to use latter.
(_ReuseOrAllocNode(_Hash_node_base*, __hashtable_alloc&)): New.
(_AllocNode<>::__node_type): Remove.
(_AllocNode<>::__node_ptr): New.
(_AllocNode<>::operator()<>(_Arg&&)): Return latter.
(_Hash_pnode_base<>): New.
(_Hash_node<>::__node_base): New.
(_Hash_node<>::__node_ptr): New.
(_Hash_node<>::__node_type): New.
(_Hash_node<>::__node_value_cache_type): New.
(_Hash_node<>::_M_next_ptr()): New.
(_Hash_pnode<typename _Ptr, bool _Cache_hash_code>): New.
(__get_node_type<>): New template alias to _Hash_node<> if the allocator
pointer type is a raw pointer, _Hash_pnode<> otherwise.
(_Hashtable_iterator_base<typename _NodeType>): New.
(_Node_iterator_base<>): Inherits from latter.
(_Hashtable_iterator<typename _NodeType, bool __constant_iterators>): New.
(_Hashtable_const_iterator<typename _NodeType, bool __constant_iterators>):
New.
(_Insert_base<>::__alloc_ptr): New.
(_Insert_base<>::__hashtable_alloc): Remove.
(_Insert_base<>::__node_type): New.
(_Insert_base<>::iterator): Define conditionally to _Node_iterator<>
or _Hashtable_iterator<> depending on __alloc_ptr being a raw pointer.
(_Insert_base<>::const_iterator): Define conditionally to
_Node_const_iterator<> or _Hashtable_const_iterator<> depending on
__alloc_ptr being a raw pointer.
(_Hashtable_local_iter_base<>): New.
(_Hashtable_local_iterator<>): New.
(_Hashtable_const_local_iterator<>): New.
(__local_iterator<>): New template alias.
(__const_local_iterator<>): New template alias.
(_Hashtable_base<>::_M_equals(const _Key&, __hash_code,
const _Hash_node_cache_value<>&)): New.
(_Hashtable_base<>::_M_node_equals(const _Hash_node_cache_value<>&,
const _Hash_node_cache_value<>&)): New.
(_Hashtable_alloc<>::__value_alloc_traits): Remove.
(_Hashtable_alloc<>::__node_base_ptr): Remove.
* include/bits/hashtable.h (_Hashtable<>): Adapt.
* testsuite/23_containers/unordered_map/allocator/ext_ptr.cc: New test.
* testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc: New test.
* testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc: New test.
* testsuite/23_containers/unordered_set/allocator/ext_ptr.cc: Adapt.
OK to commit? (even if in a few months)
François
On 02/11/20 3:11 pm, Jonathan Wakely wrote:
> On 01/11/20 22:48 +0100, François Dumont via Libstdc++ wrote:
>> Here is another attempt.
>>
>> This time I am storing the node using allocator pointer just in the
>> singly linked list of nodes. Buckets are still __node_base* so that
>> the custom pointer is not manipulated too much. Moreover iterators
>> are also using node raw pointers.
>
> There's no point doing it if you still use raw pointers.
>
> It either has to be done completely, or it's a waste of time and
> energy.
>
>> As advised I introduced new types in case of custom pointers. But I
>> am also using those in the gnu versioned namespace so that I could more
>> easily test this new code with all existing test cases.
>>
>> Note that as we are introducing a new ABI I am also changing the node
>> memory layout in this case. I think it is better to put the hash code
>> cache before the node value rather than after. It will be closer to
>> the beginning of the node and so accessible without a memory page fault.
>
> I think that's a bad idea. If somebody is using a fancy pointer which
> is just a thin wrapper around a real pointer, and it has implicit
> conversions to/from the real pointer, then I think it might work OK
> today. For example, __gnu_cxx::_Ext_pointer in <ext/pointer.h>. If the
> size and alignment of the fancy pointer is just the same as the real
> pointer, and the layout of the node classes doesn't change order, I
> think that will probably Just Work.
>
> If you reorder the node members, it definitely won't work.
>
> Is this a realistic scenario? I don't know. It might be.
>
> If we want to do that only for the versioned namespace, that would be
> OK, but should be a separate patch.
>
> I'm also concerned about the number of differences that depend on
> _GLIBCXX_INLINE_VERSION. The code gets a lot less maintainable with so
> many differences, and they only exist to support a mode nobody uses.
>
> Wouldn't implementing https://wg21.link/P0809R0 or
> https://wg21.link/P0919R3 (and https://wg21.link/P1690R1) be a better
> use of time?
>
> There are a couple more comments below, for things that I noticed
> while quickly skimming over the patch.
>
>
>> To be clear the node mem layout is:
>> - next node pointer
>> - node value_type
>> - hash code (optional)
>>
>> The new node mem layout is:
>> - next node pointer
>> - hash code (optional)
>> - node value_type
>>
>> Here is the git log in case you validate it.
>>
>>     libstdc++: Store allocator::pointer in hashtable implementation
>>
>>     Use allocator pointer type in _Hashtable implementation.
>>
>>         * include/bits/hashtable_policy.h
>>         (_ReuseOrAllocNode<>::__node_type): Remove.
>>         (_ReuseOrAllocNode<>::__node_pointer): New.
>>         (_ReuseOrAllocNode(__node_pointer, __hashtable_alloc&)): Adapt
>>         to use latter.
>>         (_ReuseOrAllocNode<>::operator()(_Arg&&)): Return latter.
>>         (_AllocNode<>::__node_type): Remove.
>>         (_AllocNode<>::__node_pointer): New.
>>         (_AllocNode<>::operator()<>(_Arg&&)): Return latter.
>>         (_Hash_node_cust_ptr_base<>): New.
>>         (_Hash_node_cache_value<typename _Value, bool _Cache_hash_code>):
>>         New.
>>         (_Hash_node<>::__node_base): New.
>>         (_Hash_node<>::__node_ptr): New.
>>         (_Hash_node<>::__node_type): New.
>>         (_Hash_node<>::__node_value_cache_type): New.
>>         (_Hash_node<>::_M_next_ptr()): New.
>>         (_Hash_cust_ptr_node<typename _Ptr, bool _Cache_hash_code>): New.
>>         (_Hashtable_iterator_base<typename _NodeType>): New.
>>         (_Node_iterator_base<>): Inherits from latter.
>>         (_Hashtable_iterator<typename _NodeType, bool
>>         __constant_iterators>): New.
>>         (_Hashtable_const_iterator<typename _NodeType, bool
>>         __constant_iterators>): New.
>>         (_Insert_base<>::__alloc_ptr): New.
>>         (_Insert_base<>::__node_type): New. Define conditionally to
>>         _Hash_node<> or _Hash_cust_ptr_node<> depending on __alloc_ptr
>>         being a raw pointer.
>>         (_Insert_base<>::__node_alloc_type): New.
>>         (_Insert_base<>::__hashtable_alloc): Remove.
>>         (_Insert_base<>::iterator): Define conditionally to
>>         _Node_iterator<> or _Hashtable_iterator<> depending on
>>         __alloc_ptr being a raw pointer.
>>         (_Insert_base<>::const_iterator): Define conditionally to
>>         _Node_const_iterator<> or _Hashtable_const_iterator<> depending
>>         on __alloc_ptr being a raw pointer.
>>         (_Hashtable_local_iter_base<>): New.
>>         (_Hash_code_base<>::_M_bucket_index(const _Hash_node_cache_value<>&,
>>         size_t)): New.
>>         (_Hashtable_local_iter_base<>): New.
>>         (_Hashtable_local_iterator<>): New.
>>         (_Hashtable_const_local_iterator<>): New.
>>         (_Hashtable_base<>::_M_equals(const _Key&, __hash_code,
>>         const _Hash_node_cache_value<>&): New.
>>         (_Hashtable_base<>::_M_node_equals(const _Hash_node_cache_value<>&,
>>         const _Hash_node_cache_value<>&)): New.
>>         (_Hashtable_alloc<>::__value_alloc_traits): Remove.
>>         (_Hashtable_alloc<>::__node_base_ptr): Remove.
>>         * include/bits/hashtable.h (_Hashtable<>): Adapt.
>>         * testsuite/23_containers/unordered_map/allocator/ext_ptr.cc:
>>         New test.
>>         * testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc:
>>         New test.
>>         * testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc:
>>         New test.
>>         * testsuite/23_containers/unordered_set/allocator/ext_ptr.cc:
>>         Adapt.
>>
>> Tested under Linux x86_64 normal and version namespace modes.
>>
>> François
>>
>>
>> On 20/10/20 1:04 pm, Jonathan Wakely wrote:
>>> On 28/09/20 22:37 +0200, François Dumont via Libstdc++ wrote:
>>>> Following recent changes on _Hashtable I rebased the patch and
>>>> completely reviewed it.
>>>>
>>>> I managed to integrate the allocator custom pointer type without
>>>> touching to _Hashtable base types like _Hash_code_base or
>>>> _Hashtable_base. However I cannot see how to use the custom pointer
>>>> type without impacting the node types like _Hash_node_base which
>>>> now takes a template parameter, the custom pointer type.
>>>>
>>>> From an ABI point of view the node types are different, however the
>>>> data structure is the same. The only difference is that the
>>>> _Hash_node_base _M_nxt is now a _Hash_node<> custom pointer rather
>>>> than a simple _Hash_node_base*.
>>>>
>>>> Even if this patch can't go in because of the abi breaking change I
>>>> am going to adapt some of the code simplifications for master.
>>>> Especially the _Hash_code_base and _Local_iterator_base
>>>> simplifications.
>>>>
>>>> Let me know if you can think of a way to integrate the custom
>>>> pointer without impacting abi. Unless impacting node types and
>>>> associated iterator types is fine even if I already noticed that
>>>> pretty printer tests are broken with those changes.
>>>
>>> The approach I used for the other containers (which was never
>>> completed and committed) is something like:
>>>
>>> struct _Node_base
>>> {
>>>   _Node_base* _M_next;
>>> };
>>>
>>> template<typename _Ptr>
>>> struct _Fancy_node_base
>>> {
>>>   _Ptr _M_next;
>>> };
>>>
>>> template<typename _Ptr>
>>>   using node_base = conditional_t<is_pointer<_Ptr>::value,
>>>                                   _Node_base,
>>>                                   _Fancy_node_base<_Ptr>>;
>>>
>>> This way all existing code that has allocators with non-fancy pointers
>>> continues to use the same type. Code using fancy pointers (which
>>> doesn't currently work properly anyway) changes to use the new types
>>> that depend on the pointer type.
>>>
>>
>
>> diff --git a/libstdc++-v3/include/bits/hashtable.h
>> b/libstdc++-v3/include/bits/hashtable.h
>> index 6c6c5edde0b..86644d447ca 100644
>> --- a/libstdc++-v3/include/bits/hashtable.h
>> +++ b/libstdc++-v3/include/bits/hashtable.h
>> @@ -182,8 +182,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>> _RehashPolicy, _Traits>,
>> private __detail::_Hashtable_alloc<
>> __alloc_rebind<_Alloc,
>> - __detail::_Hash_node<_Value,
>> - _Traits::__hash_cached::value>>>
>> +#if _GLIBCXX_INLINE_VERSION
>> + __detail::_Hash_cust_ptr_node<
>> + __detail::__alloc_val_ptr<_Alloc, _Value>,
>> + _Traits::__hash_cached::value>>>
>> +#else
>> + typename std::conditional<
>> + std::__is_pointer<
>> + __detail::__alloc_val_ptr<_Alloc, _Value>>::__value,
>> + __detail::_Hash_node<_Value, _Traits::__hash_cached::value>,
>> + __detail::_Hash_cust_ptr_node<
>> + __detail::__alloc_val_ptr<_Alloc, _Value>,
>> + _Traits::__hash_cached::value>>::type>>
>> +#endif
>
> This ugliness should be hidden behind an alias template.
>
> Use is_pointer<P>::value, not std::__is_pointer<P>::__value.
> This is C++11 code, there's no need to use the C++98 traits. And you
> don't need the std:: qualification.
>
> _Hash_cust_ptr_node is not a good name if it's also used (sometimes)
> for normal pointers. How about _Hash_pnode, or something like that?
>
>
>> +
>> + /**
>> + * Primary template struct _Hash_cust_ptr_node.
>
> This comment is not useful. It shouldn't be a Doxygen comment, because
> this is not something we need to put in the API documentation for
> end-users. I can tell it's the primary template, because I can read
> C++. What is the type for? How is it different to _Hash_node? That's
> what I'd like to read here.
>
>> + */
>> + template<typename _Ptr, bool _Cache_hash_code>
>> + struct _Hash_cust_ptr_node
>> + : _Hash_node_cust_ptr_base<__ptr_rebind<_Ptr,
>> + _Hash_cust_ptr_node<_Ptr, _Cache_hash_code>>>
>> + , _Hash_node_cache_value<typename
>> std::pointer_traits<_Ptr>::element_type,
>> + _Cache_hash_code>
>
[-- Attachment #2: hashtable.patch --]
[-- Type: text/x-patch, Size: 62376 bytes --]
diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index bc7ec926155..7b9e2f3111a 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -182,8 +182,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_RehashPolicy, _Traits>,
private __detail::_Hashtable_alloc<
__alloc_rebind<_Alloc,
- __detail::_Hash_node<_Value,
- _Traits::__hash_cached::value>>>
+ __detail::__get_node_type<_Alloc, _Value,
+ _Traits::__hash_cached::value>>>
{
static_assert(is_same<typename remove_cv<_Value>::type, _Value>::value,
"unordered container must have a non-const, non-volatile value_type");
@@ -195,21 +195,21 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
using __traits_type = _Traits;
using __hash_cached = typename __traits_type::__hash_cached;
using __constant_iterators = typename __traits_type::__constant_iterators;
- using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>;
+ using __alloc_ptr = __detail::__alloc_val_ptr<_Alloc, _Value>;
+ using __node_type = __detail::__get_node_type<
+ _Alloc, _Value, _Traits::__hash_cached::value>;
+ using __node_base = typename __node_type::__node_base;
+ using __node_value_type = typename __node_type::__node_value_cache_type;
using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
-
using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>;
- using __node_value_type =
- __detail::_Hash_node_value<_Value, __hash_cached::value>;
using __node_ptr = typename __hashtable_alloc::__node_ptr;
- using __value_alloc_traits =
- typename __hashtable_alloc::__value_alloc_traits;
using __node_alloc_traits =
typename __hashtable_alloc::__node_alloc_traits;
- using __node_base = typename __hashtable_alloc::__node_base;
- using __node_base_ptr = typename __hashtable_alloc::__node_base_ptr;
+ using __value_alloc_traits =
+ typename __node_alloc_traits::template rebind_traits<_Value>;
using __buckets_ptr = typename __hashtable_alloc::__buckets_ptr;
+ using __buckets_ptr_traits = std::pointer_traits<__buckets_ptr>;
using __insert_base = __detail::_Insert<_Key, _Value, _Alloc, _ExtractKey,
_Equal, _Hash,
@@ -233,15 +233,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
using const_iterator = typename __insert_base::const_iterator;
- using local_iterator = __detail::_Local_iterator<key_type, _Value,
- _ExtractKey, _Hash, _RangeHash, _Unused,
- __constant_iterators::value,
- __hash_cached::value>;
+ using local_iterator = __detail::__local_iterator<
+ __node_ptr, key_type, value_type,
+ _ExtractKey, _Hash, _RangeHash, _Unused,
+ __constant_iterators::value, __hash_cached::value>;
- using const_local_iterator = __detail::_Local_const_iterator<
- key_type, _Value,
- _ExtractKey, _Hash, _RangeHash, _Unused,
- __constant_iterators::value, __hash_cached::value>;
+ using const_local_iterator = __detail::__const_local_iterator<
+ __node_ptr, key_type, value_type,
+ _ExtractKey, _Hash, _RangeHash, _Unused,
+ __constant_iterators::value, __hash_cached::value>;
private:
using __rehash_type = _RehashPolicy;
@@ -279,8 +279,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
struct _Scoped_node
{
// Take ownership of a node with a constructed element.
- _Scoped_node(__node_ptr __n, __hashtable_alloc* __h)
- : _M_h(__h), _M_node(__n) { }
+ _Scoped_node(__node_ptr&& __n, __hashtable_alloc* __h)
+ : _M_h(__h), _M_node(std::move(__n)) { }
// Allocate a node and construct an element within it.
template<typename... _Args>
@@ -374,7 +374,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
#endif
private:
- __buckets_ptr _M_buckets = &_M_single_bucket;
+ __buckets_ptr _M_buckets =
+ __buckets_ptr_traits::pointer_to(_M_single_bucket);
size_type _M_bucket_count = 1;
__node_base _M_before_begin;
size_type _M_element_count = 0;
@@ -386,13 +387,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// qualified.
// Note that we can't leave hashtable with 0 bucket without adding
// numerous checks in the code to avoid 0 modulus.
- __node_base_ptr _M_single_bucket = nullptr;
+ __node_base* _M_single_bucket = nullptr;
void
_M_update_bbegin()
{
- if (_M_begin())
- _M_buckets[_M_bucket_index(*_M_begin())] = &_M_before_begin;
+ if (auto __begin = _M_begin())
+ _M_buckets[_M_bucket_index(*__begin)] = &_M_before_begin;
}
void
@@ -402,9 +403,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_update_bbegin();
}
+ void
+ _M_update_bbegin(__detail::_Hash_node_base* __n)
+ { _M_update_bbegin(static_cast<__node_ptr>(__n)); }
+
bool
_M_uses_single_bucket(__buckets_ptr __bkts) const
- { return __builtin_expect(__bkts == &_M_single_bucket, false); }
+ {
+ return __builtin_expect(
+ std::__to_address(__bkts) == &_M_single_bucket, false);
+ }
bool
_M_uses_single_bucket() const
@@ -419,7 +427,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (__builtin_expect(__bkt_count == 1, false))
{
_M_single_bucket = nullptr;
- return &_M_single_bucket;
+ return __buckets_ptr_traits::pointer_to(_M_single_bucket);
}
return __hashtable_alloc::_M_allocate_buckets(__bkt_count);
@@ -440,12 +448,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Gets bucket begin, deals with the fact that non-empty buckets contain
// their before begin node.
- __node_ptr
+ __node_type*
_M_bucket_begin(size_type __bkt) const;
- __node_ptr
+ __node_type*
_M_begin() const
- { return static_cast<__node_ptr>(_M_before_begin._M_nxt); }
+ {
+ return
+ static_cast<__node_type*>(std::__to_address(_M_before_begin._M_nxt));
+ }
// Assign *this using another _Hashtable instance. Whether elements
// are copied or moved depends on the _Ht reference.
@@ -492,6 +503,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
const _Hash&, const _Equal&, const allocator_type&,
false_type __uks);
+ static __node_ptr
+ _S_cast(__node_ptr __n)
+ { return __n; }
+
+ static __node_ptr
+ _S_cast(__detail::_Hash_node_base* __n)
+ { return static_cast<__node_ptr>(__n); }
+
public:
// Constructor, destructor, assignment, swap
_Hashtable() = default;
@@ -568,7 +587,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hashtable&
operator=(initializer_list<value_type> __l)
{
- __reuse_or_alloc_node_gen_t __roan(_M_begin(), *this);
+ __reuse_or_alloc_node_gen_t __roan(std::move(_M_before_begin._M_nxt),
+ *this);
_M_before_begin._M_nxt = nullptr;
clear();
@@ -736,16 +756,17 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Find and insert helper functions and types
// Find the node before the one matching the criteria.
- __node_base_ptr
+ __node_base*
_M_find_before_node(size_type, const key_type&, __hash_code) const;
- __node_ptr
+ __node_type*
_M_find_node(size_type __bkt, const key_type& __key,
__hash_code __c) const
{
- __node_base_ptr __before_n = _M_find_before_node(__bkt, __key, __c);
+ __node_base* __before_n = _M_find_before_node(__bkt, __key, __c);
if (__before_n)
- return static_cast<__node_ptr>(__before_n->_M_nxt);
+ return static_cast<__node_type*>(
+ std::__to_address(__before_n->_M_nxt));
return nullptr;
}
@@ -759,8 +780,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
size_type __next_bkt);
// Get the node before __n in the bucket __bkt
- __node_base_ptr
- _M_get_previous_node(size_type __bkt, __node_ptr __n);
+ __node_base*
+ _M_get_previous_node(size_type __bkt, __node_type* __n);
// Insert node __n with hash code __code, in bucket __bkt if no
// rehash (assumes no element with same key already present).
@@ -772,7 +793,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Insert node __n with key __k and hash code __code.
// Takes ownership of __n if insertion succeeds, throws otherwise.
iterator
- _M_insert_multi_node(__node_ptr __hint,
+ _M_insert_multi_node(__node_type* __hint,
__hash_code __code, __node_ptr __n);
template<typename... _Args>
@@ -830,7 +851,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_erase(false_type __uks, const key_type&);
iterator
- _M_erase(size_type __bkt, __node_base_ptr __prev_n, __node_ptr __n);
+ _M_erase(size_type __bkt, __node_base* __prev_n, __node_ptr __n);
public:
// Emplace
@@ -890,7 +911,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
const key_type& __k = __nh._M_key();
__hash_code __code = this->_M_hash_code(__k);
size_type __bkt = _M_bucket_index(__code);
- if (__node_ptr __n = _M_find_node(__bkt, __k, __code))
+ if (__node_type* __n = _M_find_node(__bkt, __k, __code))
{
__ret.node = std::move(__nh);
__ret.position = iterator(__n);
@@ -926,11 +947,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
private:
node_type
- _M_extract_node(size_t __bkt, __node_base_ptr __prev_n)
+ _M_extract_node(size_t __bkt, __node_base* __prev_n)
{
- __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+ __node_ptr __n = _S_cast(__prev_n->_M_nxt);
if (__prev_n == _M_buckets[__bkt])
- _M_remove_bucket_begin(__bkt, __n->_M_next(),
+ _M_remove_bucket_begin(__bkt, __n->_M_next_ptr(),
__n->_M_nxt ? _M_bucket_index(*__n->_M_next()) : 0);
else if (__n->_M_nxt)
{
@@ -962,7 +983,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
node_type __nh;
__hash_code __code = this->_M_hash_code(__k);
std::size_t __bkt = _M_bucket_index(__code);
- if (__node_base_ptr __prev_node = _M_find_before_node(__bkt, __k, __code))
+ if (auto __prev_node = _M_find_before_node(__bkt, __k, __code))
__nh = _M_extract_node(__bkt, __prev_node);
return __nh;
}
@@ -1032,10 +1053,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_bucket_begin(size_type __bkt) const
- -> __node_ptr
+ -> __node_type*
{
- __node_base_ptr __n = _M_buckets[__bkt];
- return __n ? static_cast<__node_ptr>(__n->_M_nxt) : nullptr;
+ __node_base* __n = _M_buckets[__bkt];
+ return __n
+ ? static_cast<__node_type*>(std::__to_address(__n->_M_nxt))
+ : nullptr;
}
template<typename _Key, typename _Value, typename _Alloc,
@@ -1123,7 +1146,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
&& __this_alloc != __that_alloc)
{
// Replacement allocator cannot free existing storage.
- this->_M_deallocate_nodes(_M_begin());
+ this->_M_deallocate_nodes(_S_cast(_M_before_begin._M_nxt));
_M_before_begin._M_nxt = nullptr;
_M_deallocate_buckets();
_M_buckets = nullptr;
@@ -1175,15 +1198,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_bucket_count = __ht._M_bucket_count;
}
else
- __builtin_memset(_M_buckets, 0,
- _M_bucket_count * sizeof(__node_base_ptr));
+ __builtin_memset(std::__to_address(_M_buckets), 0,
+ _M_bucket_count * sizeof(__node_base*));
__try
{
__hashtable_base::operator=(std::forward<_Ht>(__ht));
_M_element_count = __ht._M_element_count;
_M_rehash_policy = __ht._M_rehash_policy;
- __reuse_or_alloc_node_gen_t __roan(_M_begin(), *this);
+ __reuse_or_alloc_node_gen_t
+ __roan(std::move(_M_before_begin._M_nxt), *this);
_M_before_begin._M_nxt = nullptr;
_M_assign(std::forward<_Ht>(__ht), __roan);
if (__former_buckets)
@@ -1199,8 +1223,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_buckets = __former_buckets;
_M_bucket_count = __former_bucket_count;
}
- __builtin_memset(_M_buckets, 0,
- _M_bucket_count * sizeof(__node_base_ptr));
+ __builtin_memset(std::__to_address(_M_buckets), 0,
+ _M_bucket_count * sizeof(__node_base*));
__throw_exception_again;
}
}
@@ -1226,14 +1250,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// First deal with the special first node pointed to by
// _M_before_begin.
- __node_ptr __ht_n = __ht._M_begin();
+ __node_type* __ht_n = __ht._M_begin();
__node_ptr __this_n
= __node_gen(__fwd_value_for<_Ht>(__ht_n->_M_v()));
this->_M_copy_code(*__this_n, *__ht_n);
_M_update_bbegin(__this_n);
// Then deal with other nodes.
- __node_ptr __prev_n = __this_n;
+ __node_base* __prev_n = std::__to_address(__this_n);
for (__ht_n = __ht_n->_M_next(); __ht_n; __ht_n = __ht_n->_M_next())
{
__this_n = __node_gen(__fwd_value_for<_Ht>(__ht_n->_M_v()));
@@ -1242,7 +1266,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
size_type __bkt = _M_bucket_index(*__this_n);
if (!_M_buckets[__bkt])
_M_buckets[__bkt] = __prev_n;
- __prev_n = __this_n;
+ __prev_n = std::__to_address(__this_n);
}
}
__catch(...)
@@ -1266,7 +1290,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_rehash_policy._M_reset();
_M_bucket_count = 1;
_M_single_bucket = nullptr;
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
_M_before_begin._M_nxt = nullptr;
_M_element_count = 0;
}
@@ -1283,7 +1307,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (__builtin_expect(std::__addressof(__ht) == this, false))
return;
- this->_M_deallocate_nodes(_M_begin());
+ this->_M_deallocate_nodes(_S_cast(_M_before_begin._M_nxt));
_M_deallocate_buckets();
__hashtable_base::operator=(std::move(__ht));
_M_rehash_policy = __ht._M_rehash_policy;
@@ -1291,7 +1315,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_buckets = __ht._M_buckets;
else
{
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
_M_single_bucket = __ht._M_single_bucket;
}
@@ -1368,7 +1392,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Update buckets if __ht is using its single bucket.
if (__ht._M_uses_single_bucket())
{
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
_M_single_bucket = __ht._M_single_bucket;
}
@@ -1419,7 +1443,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
{
if (__ht._M_uses_single_bucket())
{
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
_M_single_bucket = __ht._M_single_bucket;
}
else
@@ -1427,7 +1451,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Fix bucket containing the _M_before_begin pointer that can't be
// moved.
- _M_update_bbegin(__ht._M_begin());
+ _M_update_bbegin(__ht._M_before_begin._M_nxt);
__ht._M_reset();
}
@@ -1480,13 +1504,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (!__x._M_uses_single_bucket())
{
_M_buckets = __x._M_buckets;
- __x._M_buckets = &__x._M_single_bucket;
+ __x._M_buckets =
+ __buckets_ptr_traits::pointer_to(__x._M_single_bucket);
}
}
else if (__x._M_uses_single_bucket())
{
__x._M_buckets = _M_buckets;
- _M_buckets = &_M_single_bucket;
+ _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket);
}
else
std::swap(_M_buckets, __x._M_buckets);
@@ -1626,13 +1651,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
_M_find_before_node(size_type __bkt, const key_type& __k,
__hash_code __code) const
- -> __node_base_ptr
+ -> __node_base*
{
- __node_base_ptr __prev_p = _M_buckets[__bkt];
+ __node_base* __prev_p = _M_buckets[__bkt];
if (!__prev_p)
return nullptr;
- for (__node_ptr __p = static_cast<__node_ptr>(__prev_p->_M_nxt);;
+ for (__node_type* __p =
+ static_cast<__node_type*>(std::__to_address(__prev_p->_M_nxt));;
__p = __p->_M_next())
{
if (this->_M_equals(__k, __code, *__p))
@@ -1673,7 +1699,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
if (__node->_M_nxt)
// We must update former begin bucket that is pointing to
// _M_before_begin.
- _M_buckets[_M_bucket_index(*__node->_M_next())] = __node;
+ _M_buckets[_M_bucket_index(*__node->_M_next())] =
+ std::__to_address(__node);
_M_buckets[__bkt] = &_M_before_begin;
}
@@ -1710,12 +1737,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
- _M_get_previous_node(size_type __bkt, __node_ptr __n)
- -> __node_base_ptr
+ _M_get_previous_node(size_type __bkt, __node_type* __n)
+ -> __node_base*
{
- __node_base_ptr __prev_n = _M_buckets[__bkt];
- while (__prev_n->_M_nxt != __n)
- __prev_n = __prev_n->_M_nxt;
+ __node_base* __prev_n = _M_buckets[__bkt];
+ while (std::__to_address(__prev_n->_M_nxt) != __n)
+ __prev_n = std::__to_address(__prev_n->_M_nxt);
return __prev_n;
}
@@ -1735,7 +1762,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
__hash_code __code = this->_M_hash_code(__k);
size_type __bkt = _M_bucket_index(__code);
- if (__node_ptr __p = _M_find_node(__bkt, __k, __code))
+ if (__node_type* __p = _M_find_node(__bkt, __k, __code))
// There is already an equivalent node, no insertion
return std::make_pair(iterator(__p), false);
@@ -1795,7 +1822,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Always insert at the beginning of the bucket.
_M_insert_bucket_begin(__bkt, __node);
++_M_element_count;
- return iterator(__node);
+ return iterator(std::__to_address(__node));
}
template<typename _Key, typename _Value, typename _Alloc,
@@ -1805,7 +1832,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
- _M_insert_multi_node(__node_ptr __hint,
+ _M_insert_multi_node(__node_type* __hint,
__hash_code __code, __node_ptr __node)
-> iterator
{
@@ -1822,7 +1849,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// Find the node before an equivalent one or use hint if it exists and
// if it is equivalent.
- __node_base_ptr __prev
+ __node_base* __prev
= __builtin_expect(__hint != nullptr, false)
&& this->_M_equals(__k, __code, *__hint)
? __hint
@@ -1841,7 +1868,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
{
size_type __next_bkt = _M_bucket_index(*__node->_M_next());
if (__next_bkt != __bkt)
- _M_buckets[__next_bkt] = __node;
+ _M_buckets[__next_bkt] = std::__to_address(__node);
}
}
else
@@ -1850,7 +1877,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// equivalent elements' relative positions.
_M_insert_bucket_begin(__bkt, __node);
++_M_element_count;
- return iterator(__node);
+ return iterator(std::__to_address(__node));
}
// Insert v if no element with its key is already present.
@@ -1870,8 +1897,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
__hash_code __code = this->_M_hash_code(__k);
size_type __bkt = _M_bucket_index(__code);
- if (__node_ptr __node = _M_find_node(__bkt, __k, __code))
- return { iterator(__node), false };
+ if (__node_type* __n = _M_find_node(__bkt, __k, __code))
+ return { iterator(__n), false };
_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
auto __pos
@@ -1916,14 +1943,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
erase(const_iterator __it)
-> iterator
{
- __node_ptr __n = __it._M_cur;
+ __node_type* __n = __it._M_cur;
std::size_t __bkt = _M_bucket_index(*__n);
// Look for previous node to unlink it from the erased one, this
// is why we need buckets to contain the before begin to make
// this search fast.
- __node_base_ptr __prev_n = _M_get_previous_node(__bkt, __n);
- return _M_erase(__bkt, __prev_n, __n);
+ __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
+ return _M_erase(__bkt, __prev_n, _S_cast(__prev_n->_M_nxt));
}
template<typename _Key, typename _Value, typename _Alloc,
@@ -1933,11 +1960,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
- _M_erase(size_type __bkt, __node_base_ptr __prev_n, __node_ptr __n)
+ _M_erase(size_type __bkt, __node_base* __prev_n, __node_ptr __n)
-> iterator
{
if (__prev_n == _M_buckets[__bkt])
- _M_remove_bucket_begin(__bkt, __n->_M_next(),
+ _M_remove_bucket_begin(__bkt, __n->_M_next_ptr(),
__n->_M_nxt ? _M_bucket_index(*__n->_M_next()) : 0);
else if (__n->_M_nxt)
{
@@ -1968,12 +1995,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
std::size_t __bkt = _M_bucket_index(__code);
// Look for the node before the first matching node.
- __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+ __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
if (!__prev_n)
return 0;
// We found a matching node, erase it.
- __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+ __node_ptr __n = _S_cast(__prev_n->_M_nxt);
_M_erase(__bkt, __prev_n, __n);
return 1;
}
@@ -1992,7 +2019,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
std::size_t __bkt = _M_bucket_index(__code);
// Look for the node before the first matching node.
- __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+ __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
if (!__prev_n)
return 0;
@@ -2002,8 +2029,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
// We use one loop to find all matching nodes and another to deallocate
// them so that the key stays valid during the first loop. It might be
// invalidated indirectly when destroying nodes.
- __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
- __node_ptr __n_last = __n->_M_next();
+ __node_ptr __n = _S_cast(__prev_n->_M_nxt);
+ __node_type* __n_last = __n->_M_next();
while (__n_last && this->_M_node_equals(*__n, *__n_last))
__n_last = __n_last->_M_next();
@@ -2013,19 +2040,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
size_type __result = 0;
do
{
- __node_ptr __p = __n->_M_next();
+ __node_ptr __p = __n->_M_next_ptr();
this->_M_deallocate_node(__n);
__n = __p;
++__result;
}
- while (__n != __n_last);
+ while (std::__to_address(__n) != __n_last);
_M_element_count -= __result;
if (__prev_n == _M_buckets[__bkt])
- _M_remove_bucket_begin(__bkt, __n_last, __n_last_bkt);
+ _M_remove_bucket_begin(__bkt, __n, __n_last_bkt);
else if (__n_last_bkt != __bkt)
_M_buckets[__n_last_bkt] = __prev_n;
- __prev_n->_M_nxt = __n_last;
+ __prev_n->_M_nxt = __n;
return __result;
}
@@ -2039,41 +2066,42 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
erase(const_iterator __first, const_iterator __last)
-> iterator
{
- __node_ptr __n = __first._M_cur;
- __node_ptr __last_n = __last._M_cur;
+ __node_type* __n = __first._M_cur;
+ __node_type* __last_n = __last._M_cur;
if (__n == __last_n)
return iterator(__n);
std::size_t __bkt = _M_bucket_index(*__n);
- __node_base_ptr __prev_n = _M_get_previous_node(__bkt, __n);
+ __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
+ __node_ptr __nptr = _S_cast(__prev_n->_M_nxt);
bool __is_bucket_begin = __n == _M_bucket_begin(__bkt);
std::size_t __n_bkt = __bkt;
for (;;)
{
do
{
- __node_ptr __tmp = __n;
- __n = __n->_M_next();
+ __node_ptr __tmp = __nptr;
+ __nptr = __nptr->_M_next_ptr();
this->_M_deallocate_node(__tmp);
--_M_element_count;
- if (!__n)
+ if (!__nptr)
break;
- __n_bkt = _M_bucket_index(*__n);
+ __n_bkt = _M_bucket_index(*__nptr);
}
- while (__n != __last_n && __n_bkt == __bkt);
+ while (std::__to_address(__nptr) != __last_n && __n_bkt == __bkt);
if (__is_bucket_begin)
- _M_remove_bucket_begin(__bkt, __n, __n_bkt);
- if (__n == __last_n)
+ _M_remove_bucket_begin(__bkt, __nptr, __n_bkt);
+ if (std::__to_address(__nptr) == __last_n)
break;
__is_bucket_begin = true;
__bkt = __n_bkt;
}
- if (__n && (__n_bkt != __bkt || __is_bucket_begin))
+ if (__nptr && (__n_bkt != __bkt || __is_bucket_begin))
_M_buckets[__n_bkt] = __prev_n;
- __prev_n->_M_nxt = __n;
- return iterator(__n);
+ __prev_n->_M_nxt = __nptr;
+ return iterator(std::__to_address(__nptr));
}
template<typename _Key, typename _Value, typename _Alloc,
@@ -2085,9 +2113,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
clear() noexcept
{
- this->_M_deallocate_nodes(_M_begin());
- __builtin_memset(_M_buckets, 0,
- _M_bucket_count * sizeof(__node_base_ptr));
+ this->_M_deallocate_nodes(_S_cast(_M_before_begin._M_nxt));
+ __builtin_memset(std::__to_address(_M_buckets), 0,
+ _M_bucket_count * sizeof(__node_base*));
_M_element_count = 0;
_M_before_begin._M_nxt = nullptr;
}
@@ -2148,12 +2176,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_rehash_aux(size_type __bkt_count, true_type /* __uks */)
{
__buckets_ptr __new_buckets = _M_allocate_buckets(__bkt_count);
- __node_ptr __p = _M_begin();
+ __node_type* __p = _M_begin();
_M_before_begin._M_nxt = nullptr;
std::size_t __bbegin_bkt = 0;
while (__p)
{
- __node_ptr __next = __p->_M_next();
+ __node_type* __next = __p->_M_next();
std::size_t __bkt
= __hash_code_base::_M_bucket_index(*__p, __bkt_count);
if (!__new_buckets[__bkt])
@@ -2191,16 +2219,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
_M_rehash_aux(size_type __bkt_count, false_type /* __uks */)
{
__buckets_ptr __new_buckets = _M_allocate_buckets(__bkt_count);
- __node_ptr __p = _M_begin();
+ __node_type* __p = _M_begin();
_M_before_begin._M_nxt = nullptr;
std::size_t __bbegin_bkt = 0;
std::size_t __prev_bkt = 0;
- __node_ptr __prev_p = nullptr;
+ __node_type* __prev_p = nullptr;
bool __check_bucket = false;
while (__p)
{
- __node_ptr __next = __p->_M_next();
+ __node_type* __next = __p->_M_next();
std::size_t __bkt
= __hash_code_base::_M_bucket_index(*__p, __bkt_count);
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 84961849fb4..ed85d4bb440 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -59,24 +59,29 @@ namespace __detail
// Helper function: return distance(first, last) for forward
// iterators, or 0/1 for input iterators.
- template<class _Iterator>
+ template<typename _Iterator>
inline typename std::iterator_traits<_Iterator>::difference_type
__distance_fw(_Iterator __first, _Iterator __last,
std::input_iterator_tag)
{ return __first != __last ? 1 : 0; }
- template<class _Iterator>
+ template<typename _Iterator>
inline typename std::iterator_traits<_Iterator>::difference_type
__distance_fw(_Iterator __first, _Iterator __last,
std::forward_iterator_tag)
{ return std::distance(__first, __last); }
- template<class _Iterator>
+ template<typename _Iterator>
inline typename std::iterator_traits<_Iterator>::difference_type
__distance_fw(_Iterator __first, _Iterator __last)
{ return __distance_fw(__first, __last,
std::__iterator_category(__first)); }
+ template<typename _Alloc, typename _Value>
+ using __alloc_val_ptr =
+ typename std::allocator_traits<__alloc_rebind<_Alloc,
+ _Value>>::pointer;
+
struct _Identity
{
template<typename _Tp>
@@ -94,6 +99,8 @@ namespace __detail
{ return std::get<0>(std::forward<_Tp>(__x)); }
};
+ struct _Hash_node_base;
+
template<typename _NodeAlloc>
struct _Hashtable_alloc;
@@ -107,24 +114,26 @@ namespace __detail
using __hashtable_alloc = _Hashtable_alloc<__node_alloc_type>;
using __node_alloc_traits =
typename __hashtable_alloc::__node_alloc_traits;
- using __node_type = typename __hashtable_alloc::__node_type;
+ using __node_ptr = typename __hashtable_alloc::__node_ptr;
public:
- _ReuseOrAllocNode(__node_type* __nodes, __hashtable_alloc& __h)
- : _M_nodes(__nodes), _M_h(__h) { }
+ _ReuseOrAllocNode(__node_ptr&& __nodes, __hashtable_alloc& __h)
+ : _M_nodes(std::move(__nodes)), _M_h(__h) { }
+ _ReuseOrAllocNode(_Hash_node_base* __nodes, __hashtable_alloc& __h)
+ : _M_nodes(static_cast<__node_ptr>(__nodes)), _M_h(__h) { }
_ReuseOrAllocNode(const _ReuseOrAllocNode&) = delete;
~_ReuseOrAllocNode()
{ _M_h._M_deallocate_nodes(_M_nodes); }
template<typename _Arg>
- __node_type*
+ __node_ptr
operator()(_Arg&& __arg) const
{
if (_M_nodes)
{
- __node_type* __node = _M_nodes;
- _M_nodes = _M_nodes->_M_next();
+ __node_ptr __node = _M_nodes;
+ _M_nodes = _M_nodes->_M_next_ptr();
__node->_M_nxt = nullptr;
auto& __a = _M_h._M_node_allocator();
__node_alloc_traits::destroy(__a, __node->_M_valptr());
@@ -144,7 +153,7 @@ namespace __detail
}
private:
- mutable __node_type* _M_nodes;
+ mutable __node_ptr _M_nodes;
__hashtable_alloc& _M_h;
};
@@ -155,14 +164,14 @@ namespace __detail
{
private:
using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>;
- using __node_type = typename __hashtable_alloc::__node_type;
+ using __node_ptr = typename __hashtable_alloc::__node_ptr;
public:
_AllocNode(__hashtable_alloc& __h)
: _M_h(__h) { }
template<typename _Arg>
- __node_type*
+ __node_ptr
operator()(_Arg&& __arg) const
{ return _M_h._M_allocate_node(std::forward<_Arg>(__arg)); }
@@ -220,6 +229,25 @@ namespace __detail
_Hash_node_base(_Hash_node_base* __next) noexcept : _M_nxt(__next) { }
};
+ /**
+ * struct _Hash_pnode_base
+ *
+ * Like _Hash_node_base, but used when the allocator defines a custom
+ * pointer type.
+ */
+ template<typename _NodePtr>
+ struct _Hash_pnode_base
+ {
+ using __node_ptr = _NodePtr;
+
+ __node_ptr _M_nxt;
+
+ _Hash_pnode_base() noexcept : _M_nxt() { }
+
+ _Hash_pnode_base(__node_ptr __next) noexcept : _M_nxt(__next) { }
+ };
+
+
/**
* struct _Hash_node_value_base
*
@@ -251,18 +279,23 @@ namespace __detail
/**
* Primary template struct _Hash_node_code_cache.
+ *
+ * No cache.
*/
template<bool _Cache_hash_code>
struct _Hash_node_code_cache
{ };
/**
- * Specialization for node with cache, struct _Hash_node_code_cache.
+ * Specialization for node with cache.
*/
template<>
struct _Hash_node_code_cache<true>
{ std::size_t _M_hash_code; };
+ /**
+ * Node with value and optionally a cache for the hash code.
+ */
template<typename _Value, bool _Cache_hash_code>
struct _Hash_node_value
: _Hash_node_value_base<_Value>
@@ -270,28 +303,79 @@ namespace __detail
{ };
/**
- * Primary template struct _Hash_node.
+ * struct _Hash_node.
+ *
+ * The node definition used when the allocator uses raw pointers.
*/
template<typename _Value, bool _Cache_hash_code>
struct _Hash_node
: _Hash_node_base
, _Hash_node_value<_Value, _Cache_hash_code>
{
+ using __node_base = _Hash_node_base;
+ using __node_ptr = _Hash_node*;
+ using __node_type = _Hash_node;
+ using __node_value_cache_type =
+ _Hash_node_value<_Value, _Cache_hash_code>;
+
_Hash_node*
_M_next() const noexcept
{ return static_cast<_Hash_node*>(this->_M_nxt); }
+
+ __node_ptr
+ _M_next_ptr() const noexcept
+ { return _M_next(); }
+ };
+
+ /**
+ * struct _Hash_pnode.
+ *
+ * The node definition used when the allocator defines a custom pointer type.
+ */
+ template<typename _Ptr, bool _Cache_hash_code>
+ struct _Hash_pnode
+ : _Hash_pnode_base<__ptr_rebind<_Ptr,
+ _Hash_pnode<_Ptr, _Cache_hash_code>>>
+ , _Hash_node_value<typename std::pointer_traits<_Ptr>::element_type,
+ _Cache_hash_code>
+ {
+ using __node_base =
+ _Hash_pnode_base<__ptr_rebind<_Ptr,
+ _Hash_pnode<_Ptr, _Cache_hash_code>>>;
+ using __node_ptr = typename __node_base::__node_ptr;
+ using __node_type =
+ typename std::pointer_traits<__node_ptr>::element_type;
+ using value_type = typename __node_type::value_type;
+ using __node_value_cache_type =
+ _Hash_node_value<value_type, _Cache_hash_code>;
+ typedef typename std::pointer_traits<__node_ptr>::difference_type
+ difference_type;
+
+ __node_type*
+ _M_next() const noexcept
+ { return std::__to_address(this->_M_nxt); }
+
+ __node_ptr
+ _M_next_ptr() const noexcept
+ { return this->_M_nxt; }
};
+ template<typename _Alloc, typename _Value, bool __hash_cached>
+ using __get_node_type = typename std::conditional<
+ std::is_pointer<__alloc_val_ptr<_Alloc, _Value>>::value,
+ _Hash_node<_Value, __hash_cached>,
+ _Hash_pnode<__alloc_val_ptr<_Alloc, _Value>, __hash_cached>>::type;
+
/// Base class for node iterators.
- template<typename _Value, bool _Cache_hash_code>
- struct _Node_iterator_base
+ template<typename _NodeType>
+ struct _Hashtable_iterator_base
{
- using __node_type = _Hash_node<_Value, _Cache_hash_code>;
+ using __node_type = _NodeType;
__node_type* _M_cur;
- _Node_iterator_base() = default;
- _Node_iterator_base(__node_type* __p) noexcept
+ _Hashtable_iterator_base() = default;
+ _Hashtable_iterator_base(__node_type* __p) noexcept
: _M_cur(__p) { }
void
@@ -299,18 +383,32 @@ namespace __detail
{ _M_cur = _M_cur->_M_next(); }
friend bool
- operator==(const _Node_iterator_base& __x, const _Node_iterator_base& __y)
- noexcept
+ operator==(const _Hashtable_iterator_base& __x,
+ const _Hashtable_iterator_base& __y) noexcept
{ return __x._M_cur == __y._M_cur; }
#if __cpp_impl_three_way_comparison < 201907L
friend bool
- operator!=(const _Node_iterator_base& __x, const _Node_iterator_base& __y)
- noexcept
+ operator!=(const _Hashtable_iterator_base& __x,
+ const _Hashtable_iterator_base& __y) noexcept
{ return __x._M_cur != __y._M_cur; }
#endif
};
+ /// Base class for node iterators.
+ template<typename _Value, bool _Cache_hash_code>
+ struct _Node_iterator_base
+ : _Hashtable_iterator_base<_Hash_node<_Value, _Cache_hash_code>>
+ {
+ using __base_type =
+ _Hashtable_iterator_base<_Hash_node<_Value, _Cache_hash_code>>;
+ using __node_type = typename __base_type::__node_type;
+
+ _Node_iterator_base() = default;
+ _Node_iterator_base(__node_type* __p) noexcept
+ : __base_type(__p) { }
+ };
+
/// Node iterators, used to iterate through all the hashtable.
template<typename _Value, bool __constant_iterators, bool __cache>
struct _Node_iterator
@@ -414,6 +512,110 @@ namespace __detail
}
};
+ /// Node iterators, used to iterate through all the hashtable.
+ template<typename _NodeType, bool __constant_iterators>
+ struct _Hashtable_iterator
+ : public _Hashtable_iterator_base<_NodeType>
+ {
+ private:
+ using __base_type = _Hashtable_iterator_base<_NodeType>;
+ using __node_type = typename __base_type::__node_type;
+
+ public:
+ typedef typename __node_type::value_type value_type;
+ typedef typename __node_type::difference_type difference_type;
+ typedef std::forward_iterator_tag iterator_category;
+
+ using pointer = typename std::conditional<__constant_iterators,
+ const value_type*, value_type*>::type;
+
+ using reference = typename std::conditional<__constant_iterators,
+ const value_type&, value_type&>::type;
+
+ _Hashtable_iterator() noexcept
+ : __base_type(nullptr) { }
+
+ explicit
+ _Hashtable_iterator(__node_type* __p) noexcept
+ : __base_type(__p) { }
+
+ reference
+ operator*() const noexcept
+ { return this->_M_cur->_M_v(); }
+
+ pointer
+ operator->() const noexcept
+ { return this->_M_cur->_M_valptr(); }
+
+ _Hashtable_iterator&
+ operator++() noexcept
+ {
+ this->_M_incr();
+ return *this;
+ }
+
+ _Hashtable_iterator
+ operator++(int) noexcept
+ {
+ _Hashtable_iterator __tmp(*this);
+ this->_M_incr();
+ return __tmp;
+ }
+ };
+
+ /// Node const_iterators, used to iterate through all the hashtable.
+ template<typename _NodeType, bool __constant_iterators>
+ struct _Hashtable_const_iterator
+ : public _Hashtable_iterator_base<_NodeType>
+ {
+ private:
+ using __base_type = _Hashtable_iterator_base<_NodeType>;
+ using __node_type = typename __base_type::__node_type;
+
+ public:
+ typedef typename __node_type::value_type value_type;
+ typedef typename __node_type::difference_type difference_type;
+ typedef std::forward_iterator_tag iterator_category;
+
+ typedef const value_type* pointer;
+ typedef const value_type& reference;
+
+ _Hashtable_const_iterator() noexcept
+ : __base_type(nullptr) { }
+
+ explicit
+ _Hashtable_const_iterator(__node_type* __p) noexcept
+ : __base_type(__p) { }
+
+ _Hashtable_const_iterator(
+ const _Hashtable_iterator<_NodeType,
+ __constant_iterators>& __x) noexcept
+ : __base_type(__x._M_cur) { }
+
+ reference
+ operator*() const noexcept
+ { return this->_M_cur->_M_v(); }
+
+ pointer
+ operator->() const noexcept
+ { return this->_M_cur->_M_valptr(); }
+
+ _Hashtable_const_iterator&
+ operator++() noexcept
+ {
+ this->_M_incr();
+ return *this;
+ }
+
+ _Hashtable_const_iterator
+ operator++(int) noexcept
+ {
+ _Hashtable_const_iterator __tmp(*this);
+ this->_M_incr();
+ return __tmp;
+ }
+ };
+
// Many of class template _Hashtable's template parameters are policy
// classes. These are defaults for the policies.
@@ -800,16 +1002,15 @@ namespace __detail
using __hash_cached = typename _Traits::__hash_cached;
using __constant_iterators = typename _Traits::__constant_iterators;
+ using __alloc_ptr = __alloc_val_ptr<_Alloc, _Value>;
- using __hashtable_alloc = _Hashtable_alloc<
- __alloc_rebind<_Alloc, _Hash_node<_Value,
- __hash_cached::value>>>;
+ using __node_type = __get_node_type<_Alloc, _Value, __hash_cached::value>;
+ using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;
using value_type = typename __hashtable_base::value_type;
using size_type = typename __hashtable_base::size_type;
using __unique_keys = typename _Traits::__unique_keys;
- using __node_alloc_type = typename __hashtable_alloc::__node_alloc_type;
using __node_gen_type = _AllocNode<__node_alloc_type>;
__hashtable&
@@ -827,11 +1028,19 @@ namespace __detail
const _NodeGetter&, false_type __uks);
public:
- using iterator = _Node_iterator<_Value, __constant_iterators::value,
- __hash_cached::value>;
-
- using const_iterator = _Node_const_iterator<_Value, __constant_iterators::value,
- __hash_cached::value>;
+ using iterator =
+ typename std::conditional<std::is_pointer<__alloc_ptr>::value,
+ _Node_iterator<_Value,
+ __constant_iterators::value, __hash_cached::value>,
+ _Hashtable_iterator<__node_type,
+ __constant_iterators::value>>::type;
+
+ using const_iterator =
+ typename std::conditional<std::is_pointer<__alloc_ptr>::value,
+ _Node_const_iterator<_Value,
+ __constant_iterators::value, __hash_cached::value>,
+ _Hashtable_const_iterator<__node_type,
+ __constant_iterators::value>>::type;
using __ireturn_type = typename std::conditional<__unique_keys::value,
std::pair<iterator, bool>,
@@ -1165,6 +1374,17 @@ namespace __detail
bool __cache_hash_code>
struct _Local_iterator_base;
+ /**
+ * Primary class template _Hashtable_local_iter_base.
+ *
+ * Base class for local iterators, used to iterate within a bucket
+ * but not between buckets.
+ */
+ template<typename _Key, typename _NodeType, typename _ExtractKey,
+ typename _Hash, typename _RangeHash, typename _Unused,
+ bool __cache_hash_code>
+ struct _Hashtable_local_iter_base;
+
/**
* Primary class template _Hash_code_base.
*
@@ -1307,6 +1527,47 @@ namespace __detail
_M_get_bucket() const { return _M_bucket; } // for debug mode
};
+ /// Partial specialization used when nodes contain a cached hash code.
+ template<typename _Key, typename _NodeType, typename _ExtractKey,
+ typename _Hash, typename _RangeHash, typename _Unused>
+ struct _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+ _Hash, _RangeHash, _Unused, true>
+ : public _Hashtable_iterator_base<_NodeType>
+ {
+ protected:
+ using __base_node_iter = _Hashtable_iterator_base<_NodeType>;
+ using value_type = typename _NodeType::value_type;
+ using __hash_code_base = _Hash_code_base<_Key, value_type, _ExtractKey,
+ _Hash, _RangeHash, _Unused, true>;
+
+ _Hashtable_local_iter_base() = default;
+ _Hashtable_local_iter_base(const __hash_code_base&,
+ _NodeType* __p,
+ std::size_t __bkt, std::size_t __bkt_count)
+ : __base_node_iter(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count)
+ { }
+
+ void
+ _M_incr()
+ {
+ __base_node_iter::_M_incr();
+ if (this->_M_cur)
+ {
+ std::size_t __bkt
+ = _RangeHash{}(this->_M_cur->_M_hash_code, _M_bucket_count);
+ if (__bkt != _M_bucket)
+ this->_M_cur = nullptr;
+ }
+ }
+
+ std::size_t _M_bucket;
+ std::size_t _M_bucket_count;
+
+ public:
+ std::size_t
+ _M_get_bucket() const { return _M_bucket; } // for debug mode
+ };
+
// Uninitialized storage for a _Hash_code_base.
// This type is DefaultConstructible and Assignable even if the
// _Hash_code_base type isn't, so that _Local_iterator_base<..., false>
@@ -1421,6 +1682,84 @@ namespace __detail
_M_get_bucket() const { return _M_bucket; } // for debug mode
};
+ // Partial specialization used when hash codes are not cached
+ template<typename _Key, typename _NodeType, typename _ExtractKey,
+ typename _Hash, typename _RangeHash, typename _Unused>
+ struct _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+ _Hash, _RangeHash, _Unused, false>
+ : _Hash_code_storage<_Hash>
+ , _Hashtable_iterator_base<_NodeType>
+ {
+ protected:
+ using value_type = typename _NodeType::value_type;
+ using __hash_code_base = _Hash_code_base<_Key, value_type, _ExtractKey,
+ _Hash, _RangeHash, _Unused, false>;
+ using __node_iter_base = _Hashtable_iterator_base<_NodeType>;
+
+ _Hashtable_local_iter_base() : _M_bucket_count(-1) { }
+
+ _Hashtable_local_iter_base(const __hash_code_base& __base,
+ _NodeType* __p,
+ std::size_t __bkt, std::size_t __bkt_count)
+ : __node_iter_base(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count)
+ { _M_init(__base.hash_function()); }
+
+ ~_Hashtable_local_iter_base()
+ {
+ if (_M_bucket_count != -1)
+ _M_destroy();
+ }
+
+ _Hashtable_local_iter_base(const _Hashtable_local_iter_base& __iter)
+ : __node_iter_base(__iter._M_cur), _M_bucket(__iter._M_bucket)
+ , _M_bucket_count(__iter._M_bucket_count)
+ {
+ if (_M_bucket_count != -1)
+ _M_init(*__iter._M_h());
+ }
+
+ _Hashtable_local_iter_base&
+ operator=(const _Hashtable_local_iter_base& __iter)
+ {
+ if (_M_bucket_count != -1)
+ _M_destroy();
+ this->_M_cur = __iter._M_cur;
+ _M_bucket = __iter._M_bucket;
+ _M_bucket_count = __iter._M_bucket_count;
+ if (_M_bucket_count != -1)
+ _M_init(*__iter._M_h());
+ return *this;
+ }
+
+ void
+ _M_incr()
+ {
+ __node_iter_base::_M_incr();
+ if (this->_M_cur)
+ {
+ std::size_t __bkt =
+ _RangeHash{}((*this->_M_h())(_ExtractKey{}(this->_M_cur->_M_v())),
+ _M_bucket_count);
+ if (__bkt != _M_bucket)
+ this->_M_cur = nullptr;
+ }
+ }
+
+ std::size_t _M_bucket;
+ std::size_t _M_bucket_count;
+
+ void
+ _M_init(const _Hash& __h)
+ { ::new(this->_M_h()) _Hash(__h); }
+
+ void
+ _M_destroy() { this->_M_h()->~_Hash(); }
+
+ public:
+ std::size_t
+ _M_get_bucket() const { return _M_bucket; } // for debug mode
+ };
+
/// local iterators
template<typename _Key, typename _Value, typename _ExtractKey,
typename _Hash, typename _RangeHash, typename _Unused,
@@ -1536,6 +1875,156 @@ namespace __detail
}
};
+ /// local iterators
+ template<typename _Key, typename _NodeType, typename _ExtractKey,
+ typename _Hash, typename _RangeHash, typename _Unused,
+ bool __constant_iterators, bool __cache>
+ struct _Hashtable_local_iterator
+ : public _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+ _Hash, _RangeHash, _Unused, __cache>
+ {
+ private:
+ using __base_type =
+ _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+ _Hash, _RangeHash, _Unused, __cache>;
+ using __hash_code_base = typename __base_type::__hash_code_base;
+ using __node_type = typename __base_type::__node_type;
+
+ public:
+ typedef typename __base_type::value_type value_type;
+ typedef typename std::conditional<__constant_iterators,
+ const value_type*, value_type*>::type
+ pointer;
+ typedef typename std::conditional<__constant_iterators,
+ const value_type&, value_type&>::type
+ reference;
+ typedef typename __node_type::difference_type difference_type;
+ typedef std::forward_iterator_tag iterator_category;
+
+ _Hashtable_local_iterator() = default;
+
+ _Hashtable_local_iterator(const __hash_code_base& __base,
+ __node_type* __n,
+ std::size_t __bkt, std::size_t __bkt_count)
+ : __base_type(__base, __n, __bkt, __bkt_count)
+ { }
+
+ reference
+ operator*() const
+ { return this->_M_cur->_M_v(); }
+
+ pointer
+ operator->() const
+ { return this->_M_cur->_M_valptr(); }
+
+ _Hashtable_local_iterator&
+ operator++()
+ {
+ this->_M_incr();
+ return *this;
+ }
+
+ _Hashtable_local_iterator
+ operator++(int)
+ {
+ _Hashtable_local_iterator __tmp(*this);
+ this->_M_incr();
+ return __tmp;
+ }
+ };
+
+ /// local const_iterators
+ template<typename _Key, typename _NodeType, typename _ExtractKey,
+ typename _Hash, typename _RangeHash, typename _Unused,
+ bool __constant_iterators, bool __cache>
+ struct _Hashtable_const_local_iterator
+ : public _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+ _Hash, _RangeHash, _Unused, __cache>
+ {
+ private:
+ using __base_type =
+ _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+ _Hash, _RangeHash, _Unused, __cache>;
+ using __hash_code_base = typename __base_type::__hash_code_base;
+ using __node_type = typename __base_type::__node_type;
+
+ public:
+ typedef typename __base_type::value_type value_type;
+ typedef const value_type* pointer;
+ typedef const value_type& reference;
+ typedef typename __node_type::difference_type difference_type;
+ typedef std::forward_iterator_tag iterator_category;
+
+ _Hashtable_const_local_iterator() = default;
+
+ _Hashtable_const_local_iterator(const __hash_code_base& __base,
+ __node_type* __n,
+ std::size_t __bkt, std::size_t __bkt_count)
+ : __base_type(__base, __n, __bkt, __bkt_count)
+ { }
+
+ _Hashtable_const_local_iterator(const _Hashtable_local_iterator<
+ _Key, _NodeType, _ExtractKey,
+ _Hash, _RangeHash, _Unused,
+ __constant_iterators,
+ __cache>& __x)
+ : __base_type(__x)
+ { }
+
+ reference
+ operator*() const
+ { return this->_M_cur->_M_v(); }
+
+ pointer
+ operator->() const
+ { return this->_M_cur->_M_valptr(); }
+
+ _Hashtable_const_local_iterator&
+ operator++()
+ {
+ this->_M_incr();
+ return *this;
+ }
+
+ _Hashtable_const_local_iterator
+ operator++(int)
+ {
+ _Hashtable_const_local_iterator __tmp(*this);
+ this->_M_incr();
+ return __tmp;
+ }
+ };
+
+ template<typename _NodePtr, typename _Key, typename _Value,
+ typename _ExtractKey, typename _Hash,
+ typename _RangeHash, typename _Unused,
+ bool __constant_iterators, bool __hash_cached>
+ using __local_iterator = typename std::conditional<
+ std::is_pointer<_NodePtr>::value,
+ _Local_iterator<_Key, _Value,
+ _ExtractKey, _Hash, _RangeHash, _Unused,
+ __constant_iterators, __hash_cached>,
+ _Hashtable_local_iterator<_Key,
+ typename std::pointer_traits<_NodePtr>::element_type,
+ _ExtractKey, _Hash, _RangeHash, _Unused,
+ __constant_iterators,
+ __hash_cached>>::type;
+
+ template<typename _NodePtr, typename _Key, typename _Value,
+ typename _ExtractKey, typename _Hash,
+ typename _RangeHash, typename _Unused,
+ bool __constant_iterators, bool __hash_cached>
+ using __const_local_iterator = typename std::conditional<
+ std::is_pointer<_NodePtr>::value,
+ _Local_const_iterator<_Key, _Value,
+ _ExtractKey, _Hash, _RangeHash, _Unused,
+ __constant_iterators, __hash_cached>,
+ _Hashtable_const_local_iterator<_Key,
+ typename std::pointer_traits<_NodePtr>::element_type,
+ _ExtractKey, _Hash, _RangeHash, _Unused,
+ __constant_iterators,
+ __hash_cached>>::type;
+
/**
* Primary class template _Hashtable_base.
*
@@ -1679,8 +2168,9 @@ namespace __detail
if (!__prev_n)
return false;
- for (__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);;
- __n = __n->_M_next())
+ __node_type* __n =
+ static_cast<__node_type*>(std::__to_address(__prev_n->_M_nxt));
+ for (;; __n = __n->_M_next())
{
if (__n->_M_v() == *__itx)
break;
@@ -1739,7 +2229,8 @@ namespace __detail
if (!__y_prev_n)
return false;
- __node_type* __y_n = static_cast<__node_type*>(__y_prev_n->_M_nxt);
+ __node_type* __y_n =
+ static_cast<__node_type*>(std::__to_address(__y_prev_n->_M_nxt));
for (;;)
{
if (__this->key_eq()(_ExtractKey{}(__y_n->_M_v()),
@@ -1786,16 +2277,12 @@ namespace __detail
// Use __gnu_cxx to benefit from _S_always_equal and al.
using __node_alloc_traits = __gnu_cxx::__alloc_traits<__node_alloc_type>;
- using __value_alloc_traits = typename __node_alloc_traits::template
- rebind_traits<typename __node_type::value_type>;
-
- using __node_ptr = __node_type*;
- using __node_base = _Hash_node_base;
- using __node_base_ptr = __node_base*;
+ using __node_ptr = typename __node_alloc_traits::pointer;
+ using __node_base = typename __node_type::__node_base;
using __buckets_alloc_type =
- __alloc_rebind<__node_alloc_type, __node_base_ptr>;
+ __alloc_rebind<__node_alloc_type, __node_base*>;
using __buckets_alloc_traits = std::allocator_traits<__buckets_alloc_type>;
- using __buckets_ptr = __node_base_ptr*;
+ using __buckets_ptr = typename __buckets_alloc_traits::pointer;
_Hashtable_alloc() = default;
_Hashtable_alloc(const _Hashtable_alloc&) = default;
@@ -1848,14 +2335,13 @@ namespace __detail
-> __node_ptr
{
auto __nptr = __node_alloc_traits::allocate(_M_node_allocator(), 1);
- __node_ptr __n = std::__to_address(__nptr);
__try
{
- ::new ((void*)__n) __node_type;
+ ::new ((void*)std::__to_address(__nptr)) __node_type;
__node_alloc_traits::construct(_M_node_allocator(),
- __n->_M_valptr(),
+ __nptr->_M_valptr(),
std::forward<_Args>(__args)...);
- return __n;
+ return __nptr;
}
__catch(...)
{
@@ -1866,20 +2352,18 @@ namespace __detail
template<typename _NodeAlloc>
void
- _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node(__node_ptr __n)
+ _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node(__node_ptr __nptr)
{
- __node_alloc_traits::destroy(_M_node_allocator(), __n->_M_valptr());
- _M_deallocate_node_ptr(__n);
+ __node_alloc_traits::destroy(_M_node_allocator(), __nptr->_M_valptr());
+ _M_deallocate_node_ptr(__nptr);
}
template<typename _NodeAlloc>
void
- _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node_ptr(__node_ptr __n)
+ _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node_ptr(__node_ptr __nptr)
{
- typedef typename __node_alloc_traits::pointer _Ptr;
- auto __ptr = std::pointer_traits<_Ptr>::pointer_to(*__n);
- __n->~__node_type();
- __node_alloc_traits::deallocate(_M_node_allocator(), __ptr, 1);
+ __nptr->~__node_type();
+ __node_alloc_traits::deallocate(_M_node_allocator(), __nptr, 1);
}
template<typename _NodeAlloc>
@@ -1889,7 +2373,7 @@ namespace __detail
while (__n)
{
__node_ptr __tmp = __n;
- __n = __n->_M_next();
+ __n = __n->_M_next_ptr();
_M_deallocate_node(__tmp);
}
}
@@ -1902,9 +2386,9 @@ namespace __detail
__buckets_alloc_type __alloc(_M_node_allocator());
auto __ptr = __buckets_alloc_traits::allocate(__alloc, __bkt_count);
- __buckets_ptr __p = std::__to_address(__ptr);
- __builtin_memset(__p, 0, __bkt_count * sizeof(__node_base_ptr));
- return __p;
+ __builtin_memset(std::__to_address(__ptr), 0,
+ __bkt_count * sizeof(__node_base*));
+ return __ptr;
}
template<typename _NodeAlloc>
@@ -1913,10 +2397,8 @@ namespace __detail
_M_deallocate_buckets(__buckets_ptr __bkts,
std::size_t __bkt_count)
{
- typedef typename __buckets_alloc_traits::pointer _Ptr;
- auto __ptr = std::pointer_traits<_Ptr>::pointer_to(*__bkts);
__buckets_alloc_type __alloc(_M_node_allocator());
- __buckets_alloc_traits::deallocate(__alloc, __ptr, __bkt_count);
+ __buckets_alloc_traits::deallocate(__alloc, __bkts, __bkt_count);
}
//@} hashtable-detail
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..5e9ff548032
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc
@@ -0,0 +1,57 @@
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_map>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+ std::size_t
+ operator()(const T& t) const noexcept
+ { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_map<T, int, H, E,
+ CustomPointerAlloc<std::pair<const T, int>>>;
+
+void test01()
+{
+ typedef CustomPointerAlloc<std::pair<const T, int>> alloc_type;
+ typedef std::unordered_map<T, int, H, E, alloc_type> test_type;
+ test_type v;
+ v.insert({ T(), 0 });
+ VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+ test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..6dd62a40293
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc
@@ -0,0 +1,57 @@
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_map>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+ std::size_t
+ operator()(const T& t) const noexcept
+ { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_multimap<T, int, H, E,
+ CustomPointerAlloc<std::pair<const T, int>>>;
+
+void test01()
+{
+ typedef CustomPointerAlloc<std::pair<const T, int>> alloc_type;
+ typedef std::unordered_multimap<T, int, H, E, alloc_type> test_type;
+ test_type v;
+ v.insert({ T(), 0 });
+ VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+ test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..dbc7b6247a2
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc
@@ -0,0 +1,56 @@
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_set>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+ std::size_t
+ operator()(const T& t) const noexcept
+ { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_multiset<T, H, E, CustomPointerAlloc<T>>;
+
+void test01()
+{
+ typedef CustomPointerAlloc<T> alloc_type;
+ typedef std::unordered_multiset<T, H, E, alloc_type> test_type;
+ test_type v;
+ v.insert(T());
+ VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+ test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
index c0e6a1f53a2..88814b3009c 100644
--- a/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
+++ b/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
@@ -15,10 +15,7 @@
// with this library; see the file COPYING3. If not see
// <http://www.gnu.org/licenses/>.
-// This test fails to compile since C++17 (see xfail-if below) so we can only
-// do a "run" test for C++11 and C++14, and a "compile" test for C++17 and up.
-// { dg-do run { target { c++11_only || c++14_only } } }
-// { dg-do compile { target c++17 } }
+// { dg-do run { target { c++11 } } }
#include <unordered_set>
#include <memory>
@@ -26,15 +23,22 @@
#include <testsuite_allocator.h>
struct T { int i; };
-bool operator==(const T& l, const T& r) { return l.i == r.i; }
-struct H { std::size_t operator()(const T& t) const noexcept { return t.i; }
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+ std::size_t
+ operator()(const T& t) const noexcept
+ { return t.i; }
};
-struct E : std::equal_to<T> { };
+
+struct E : std::equal_to<T>
+{ };
using __gnu_test::CustomPointerAlloc;
-// { dg-xfail-if "node reinsertion assumes raw pointers" { c++17 } }
-// TODO when removing this xfail change the test back to "dg-do run".
template class std::unordered_set<T, H, E, CustomPointerAlloc<T>>;
void test01()
Thread overview: 10+ messages
2020-04-19 17:31 François Dumont
2020-05-15 21:12 ` François Dumont
2020-09-28 20:37 ` François Dumont
2020-10-20 11:04 ` Jonathan Wakely
2020-10-20 17:26 ` François Dumont
2020-11-01 21:48 ` François Dumont
2020-11-02 14:11 ` Jonathan Wakely
2020-11-02 21:33 ` François Dumont
2021-01-11 18:10 ` François Dumont [this message]
2021-06-10 17:22 ` François Dumont