Subject: Re: libstdc++ PR 57272 Fancy pointer support in Hashtable
From: François Dumont
To: Jonathan Wakely
Cc: "libstdc++@gcc.gnu.org", gcc-patches
Date: Thu, 10 Jun 2021 19:22:01 +0200
References: <20201020110422.GA926136@redhat.com> <528bdf1d-ed96-5d9d-89db-e1bf57268e24@gmail.com> <20201102141120.GR503596@redhat.com>

I would like to renew this proposal. I considered all your feedback except:

On 02/11/20 3:11 pm, Jonathan Wakely wrote:
>
> There's no point doing it if you still use raw pointers.
>
> It either has to be done completely, or it's a waste of time and
> energy.
>

Why? Can you provide the Standard wording explaining why the custom pointer must be used everywhere?

For the moment I consider that fancy pointer types are meant to give access to some special memory area in which a plain raw pointer is not enough to describe an object's location. This is why this patch makes sure that the fancy pointer is stored and returned to the allocator without any loss of information. Otherwise, for internal Hashtable purposes, plain raw pointers are still used. I cannot imagine any user expecting to improve container performance with a hand-written pointer implementation.
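To make the intent concrete, here is a standalone sketch (not part of the patch; all names such as SegmentPtr, SegmentAlloc and the segment field are hypothetical) of the kind of allocator/fancy-pointer pair this support targets — a class-type pointer carrying location information that a plain T* cannot:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <type_traits>

// Illustrative only: a minimal "fancy pointer" carrying extra location
// information beyond the raw address, plus an allocator that hands it out.
template<typename T>
struct SegmentPtr
{
    using element_type = T;
    using difference_type = std::ptrdiff_t;
    template<typename U> using rebind = SegmentPtr<U>;

    SegmentPtr() = default;
    SegmentPtr(std::nullptr_t) { }
    SegmentPtr(T* __p, int __seg) : raw(__p), segment(__seg) { }

    T& operator*() const { return *raw; }
    T* operator->() const { return raw; }
    explicit operator bool() const { return raw != nullptr; }

    // Used by std::pointer_traits<SegmentPtr<T>>::pointer_to.
    static SegmentPtr pointer_to(T& __obj)
    { return SegmentPtr(std::addressof(__obj), 0); }

    friend bool operator==(SegmentPtr __a, SegmentPtr __b)
    { return __a.raw == __b.raw; }
    friend bool operator!=(SegmentPtr __a, SegmentPtr __b)
    { return __a.raw != __b.raw; }

    T* raw = nullptr;   // where the object actually lives
    int segment = 0;    // information a raw pointer cannot carry
};

template<typename T>
struct SegmentAlloc
{
    using value_type = T;
    using pointer = SegmentPtr<T>;

    SegmentAlloc() = default;
    template<typename U> SegmentAlloc(const SegmentAlloc<U>&) { }

    pointer allocate(std::size_t __n)
    { return pointer(std::allocator<T>().allocate(__n), /*segment=*/1); }

    void deallocate(pointer __p, std::size_t __n)
    { std::allocator<T>().deallocate(__p.raw, __n); }

    friend bool operator==(SegmentAlloc, SegmentAlloc) { return true; }
    friend bool operator!=(SegmentAlloc, SegmentAlloc) { return false; }
};

// The container sees SegmentPtr<T>, not T*, through allocator_traits.
static_assert(std::is_same<std::allocator_traits<SegmentAlloc<int>>::pointer,
                           SegmentPtr<int>>::value,
              "allocator_traits propagates the fancy pointer type");
```

Note the round-trip this enables: std::pointer_traits<SegmentPtr<T>>::pointer_to recovers a fancy pointer from a reference, which is the same operation the patch performs for _M_single_bucket.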
For the moment I am ignoring the comment in the PR about limiting the operations performed with the pointer (except that I am not using it everywhere, of course). I will propose adding move semantics to those pointers if this patch is accepted. François --------------F5F59F50A853E79603EBCD6C Content-Type: text/x-patch; charset=UTF-8; name="hashtable_cust_ptr.patch" Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename="hashtable_cust_ptr.patch" diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h index 4bdbe7dd9cc..c77cb50c3d9 100644 --- a/libstdc++-v3/include/bits/hashtable.h +++ b/libstdc++-v3/include/bits/hashtable.h @@ -182,8 +182,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _RehashPolicy, _Traits>, private __detail::_Hashtable_alloc< __alloc_rebind<_Alloc, - __detail::_Hash_node<_Value, - _Traits::__hash_cached::value>>> + __detail::__get_node_type<_Alloc, _Value, + _Traits::__hash_cached::value>>> { static_assert(is_same::type, _Value>::value, "unordered container must have a non-const, non-volatile value_type"); @@ -195,21 +195,21 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION using __traits_type = _Traits; using __hash_cached = typename __traits_type::__hash_cached; using __constant_iterators = typename __traits_type::__constant_iterators; - using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>; + using __node_type = __detail::__get_node_type< + _Alloc, _Value, _Traits::__hash_cached::value>; using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>; - using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>; using __node_value_type = - __detail::_Hash_node_value<_Value, __hash_cached::value>; + typename __node_type::__node_value_cache_type; using __node_ptr = typename __hashtable_alloc::__node_ptr; - using __value_alloc_traits = - typename __hashtable_alloc::__value_alloc_traits; using __node_alloc_traits = typename __hashtable_alloc::__node_alloc_traits; - using __node_base = typename
__hashtable_alloc::__node_base; - using __node_base_ptr = typename __hashtable_alloc::__node_base_ptr; + using __node_base = typename __node_type::__node_base; + using __value_alloc_traits = + typename __node_alloc_traits::template rebind_traits<_Value>; using __buckets_ptr = typename __hashtable_alloc::__buckets_ptr; + using __buckets_ptr_traits = std::pointer_traits<__buckets_ptr>; using __insert_base = __detail::_Insert<_Key, _Value, _Alloc, _ExtractKey, _Equal, _Hash, @@ -233,15 +233,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION using const_iterator = typename __insert_base::const_iterator; - using local_iterator = __detail::_Local_iterator; + using local_iterator = __detail::__local_iterator< + __node_ptr, key_type, value_type, + _ExtractKey, _Hash, _RangeHash, _Unused, + __constant_iterators::value, __hash_cached::value>; - using const_local_iterator = __detail::_Local_const_iterator< - key_type, _Value, - _ExtractKey, _Hash, _RangeHash, _Unused, - __constant_iterators::value, __hash_cached::value>; + using const_local_iterator = __detail::__const_local_iterator< + __node_ptr, key_type, value_type, + _ExtractKey, _Hash, _RangeHash, _Unused, + __constant_iterators::value, __hash_cached::value>; private: using __rehash_type = _RehashPolicy; @@ -376,7 +376,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION #endif private: - __buckets_ptr _M_buckets = &_M_single_bucket; + __buckets_ptr _M_buckets = + __buckets_ptr_traits::pointer_to(_M_single_bucket); size_type _M_bucket_count = 1; __node_base _M_before_begin; size_type _M_element_count = 0; @@ -388,13 +389,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // qualified. // Note that we can't leave hashtable with 0 bucket without adding // numerous checks in the code to avoid 0 modulus. 
- __node_base_ptr _M_single_bucket = nullptr; + __node_base* _M_single_bucket = nullptr; void _M_update_bbegin() { - if (_M_begin()) - _M_buckets[_M_bucket_index(*_M_begin())] = &_M_before_begin; + if (auto __begin = _M_begin()) + _M_buckets[_M_bucket_index(*__begin)] = &_M_before_begin; } void @@ -406,7 +407,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION bool _M_uses_single_bucket(__buckets_ptr __bkts) const - { return __builtin_expect(__bkts == &_M_single_bucket, false); } + { + return __builtin_expect( + std::__to_address(__bkts) == &_M_single_bucket, false); + } bool _M_uses_single_bucket() const @@ -421,7 +425,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION if (__builtin_expect(__bkt_count == 1, false)) { _M_single_bucket = nullptr; - return &_M_single_bucket; + return __buckets_ptr_traits::pointer_to(_M_single_bucket); } return __hashtable_alloc::_M_allocate_buckets(__bkt_count); @@ -440,14 +444,29 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _M_deallocate_buckets() { _M_deallocate_buckets(_M_buckets, _M_bucket_count); } + static __node_ptr + _S_cast(__node_ptr __n) + { return __n; } + + static __node_ptr + _S_cast(__detail::_Hash_node_base* __n) + { return static_cast<__node_ptr>(__n); } + // Gets bucket begin, deals with the fact that non-empty buckets contain // their before begin node. - __node_ptr + __node_type* _M_bucket_begin(size_type __bkt) const; __node_ptr + _M_pbegin() const + { return _S_cast(_M_before_begin._M_nxt); } + + __node_type* _M_begin() const - { return static_cast<__node_ptr>(_M_before_begin._M_nxt); } + { + return + static_cast<__node_type*>(std::__to_address(_M_before_begin._M_nxt)); + } // Assign *this using another _Hashtable instance. Whether elements // are copied or moved depends on the _Ht reference. 
@@ -579,7 +598,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _Hashtable& operator=(initializer_list __l) { - __reuse_or_alloc_node_gen_t __roan(_M_begin(), *this); + __reuse_or_alloc_node_gen_t __roan(_M_pbegin(), *this); _M_before_begin._M_nxt = nullptr; clear(); @@ -779,31 +798,33 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // Find and insert helper functions and types // Find the node before the one matching the criteria. - __node_base_ptr + __node_base* _M_find_before_node(size_type, const key_type&, __hash_code) const; template - __node_base_ptr + __node_base* _M_find_before_node_tr(size_type, const _Kt&, __hash_code) const; - __node_ptr + __node_type* _M_find_node(size_type __bkt, const key_type& __key, __hash_code __c) const { - __node_base_ptr __before_n = _M_find_before_node(__bkt, __key, __c); + __node_base* __before_n = _M_find_before_node(__bkt, __key, __c); if (__before_n) - return static_cast<__node_ptr>(__before_n->_M_nxt); + return static_cast<__node_type*>( + std::__to_address(__before_n->_M_nxt)); return nullptr; } template - __node_ptr + __node_type* _M_find_node_tr(size_type __bkt, const _Kt& __key, __hash_code __c) const { auto __before_n = _M_find_before_node_tr(__bkt, __key, __c); if (__before_n) - return static_cast<__node_ptr>(__before_n->_M_nxt); + return static_cast<__node_type*>( + std::__to_address(__before_n->_M_nxt)); return nullptr; } @@ -817,8 +838,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION size_type __next_bkt); // Get the node before __n in the bucket __bkt - __node_base_ptr - _M_get_previous_node(size_type __bkt, __node_ptr __n); + __node_base* + _M_get_previous_node(size_type __bkt, __node_type* __n); // Insert node __n with hash code __code, in bucket __bkt if no // rehash (assumes no element with same key already present). @@ -830,7 +851,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // Insert node __n with key __k and hash code __code. // Takes ownership of __n if insertion succeeds, throws otherwise. 
iterator - _M_insert_multi_node(__node_ptr __hint, + _M_insert_multi_node(__node_type* __hint, __hash_code __code, __node_ptr __n); template @@ -914,7 +935,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _M_erase(false_type __uks, const key_type&); iterator - _M_erase(size_type __bkt, __node_base_ptr __prev_n, __node_ptr __n); + _M_erase(size_type __bkt, __node_base* __prev_n, __node_ptr __n); public: // Emplace @@ -974,7 +995,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION const key_type& __k = __nh._M_key(); __hash_code __code = this->_M_hash_code(__k); size_type __bkt = _M_bucket_index(__code); - if (__node_ptr __n = _M_find_node(__bkt, __k, __code)) + if (__node_type* __n = _M_find_node(__bkt, __k, __code)) { __ret.node = std::move(__nh); __ret.position = iterator(__n); @@ -1010,11 +1031,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION private: node_type - _M_extract_node(size_t __bkt, __node_base_ptr __prev_n) + _M_extract_node(size_t __bkt, __node_base* __prev_n) { - __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt); + __node_ptr __n = _S_cast(__prev_n->_M_nxt); if (__prev_n == _M_buckets[__bkt]) - _M_remove_bucket_begin(__bkt, __n->_M_next(), + _M_remove_bucket_begin(__bkt, __n->_M_next_ptr(), __n->_M_nxt ? _M_bucket_index(*__n->_M_next()) : 0); else if (__n->_M_nxt) { @@ -1046,7 +1067,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION node_type __nh; __hash_code __code = this->_M_hash_code(__k); std::size_t __bkt = _M_bucket_index(__code); - if (__node_base_ptr __prev_node = _M_find_before_node(__bkt, __k, __code)) + if (auto __prev_node = _M_find_before_node(__bkt, __k, __code)) __nh = _M_extract_node(__bkt, __prev_node); return __nh; } @@ -1116,10 +1137,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal, _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>:: _M_bucket_begin(size_type __bkt) const - -> __node_ptr + -> __node_type* { - __node_base_ptr __n = _M_buckets[__bkt]; - return __n ? 
static_cast<__node_ptr>(__n->_M_nxt) : nullptr; + __node_base* __n = _M_buckets[__bkt]; + return __n + ? static_cast<__node_type*>(std::__to_address(__n->_M_nxt)) + : nullptr; } template_M_deallocate_nodes(_M_begin()); + this->_M_deallocate_nodes(_M_pbegin()); _M_before_begin._M_nxt = nullptr; _M_deallocate_buckets(); _M_buckets = nullptr; @@ -1259,15 +1282,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _M_bucket_count = __ht._M_bucket_count; } else - __builtin_memset(_M_buckets, 0, - _M_bucket_count * sizeof(__node_base_ptr)); + __builtin_memset(std::__to_address(_M_buckets), 0, + _M_bucket_count * sizeof(__node_base*)); __try { __hashtable_base::operator=(std::forward<_Ht>(__ht)); _M_element_count = __ht._M_element_count; _M_rehash_policy = __ht._M_rehash_policy; - __reuse_or_alloc_node_gen_t __roan(_M_begin(), *this); + __reuse_or_alloc_node_gen_t __roan(_M_pbegin(), *this); _M_before_begin._M_nxt = nullptr; _M_assign(std::forward<_Ht>(__ht), __roan); if (__former_buckets) @@ -1283,8 +1306,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _M_buckets = __former_buckets; _M_bucket_count = __former_bucket_count; } - __builtin_memset(_M_buckets, 0, - _M_bucket_count * sizeof(__node_base_ptr)); + __builtin_memset(std::__to_address(_M_buckets), 0, + _M_bucket_count * sizeof(__node_base*)); __throw_exception_again; } } @@ -1310,14 +1333,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // First deal with the special first node pointed to by // _M_before_begin. - __node_ptr __ht_n = __ht._M_begin(); + __node_type* __ht_n = __ht._M_begin(); __node_ptr __this_n = __node_gen(__fwd_value_for<_Ht>(__ht_n->_M_v())); this->_M_copy_code(*__this_n, *__ht_n); _M_update_bbegin(__this_n); // Then deal with other nodes. 
- __node_ptr __prev_n = __this_n; + __node_base* __prev_n = std::__to_address(__this_n); for (__ht_n = __ht_n->_M_next(); __ht_n; __ht_n = __ht_n->_M_next()) { __this_n = __node_gen(__fwd_value_for<_Ht>(__ht_n->_M_v())); @@ -1326,7 +1349,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION size_type __bkt = _M_bucket_index(*__this_n); if (!_M_buckets[__bkt]) _M_buckets[__bkt] = __prev_n; - __prev_n = __this_n; + __prev_n = std::__to_address(__this_n); } } __catch(...) @@ -1350,7 +1373,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _M_rehash_policy._M_reset(); _M_bucket_count = 1; _M_single_bucket = nullptr; - _M_buckets = &_M_single_bucket; + _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket); _M_before_begin._M_nxt = nullptr; _M_element_count = 0; } @@ -1367,7 +1390,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION if (__builtin_expect(std::__addressof(__ht) == this, false)) return; - this->_M_deallocate_nodes(_M_begin()); + this->_M_deallocate_nodes(_M_pbegin()); _M_deallocate_buckets(); __hashtable_base::operator=(std::move(__ht)); _M_rehash_policy = __ht._M_rehash_policy; @@ -1375,7 +1398,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _M_buckets = __ht._M_buckets; else { - _M_buckets = &_M_single_bucket; + _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket); _M_single_bucket = __ht._M_single_bucket; } @@ -1451,7 +1474,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // Update buckets if __ht is using its single bucket. if (__ht._M_uses_single_bucket()) { - _M_buckets = &_M_single_bucket; + _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket); _M_single_bucket = __ht._M_single_bucket; } @@ -1502,7 +1525,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION { if (__ht._M_uses_single_bucket()) { - _M_buckets = &_M_single_bucket; + _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket); _M_single_bucket = __ht._M_single_bucket; } else @@ -1510,7 +1533,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // Fix bucket containing the _M_before_begin pointer that can't be // moved. 
- _M_update_bbegin(__ht._M_begin()); + _M_update_bbegin(__ht._M_pbegin()); __ht._M_reset(); } @@ -1563,13 +1586,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION if (!__x._M_uses_single_bucket()) { _M_buckets = __x._M_buckets; - __x._M_buckets = &__x._M_single_bucket; + __x._M_buckets = + __buckets_ptr_traits::pointer_to(__x._M_single_bucket); } } else if (__x._M_uses_single_bucket()) { __x._M_buckets = _M_buckets; - _M_buckets = &_M_single_bucket; + _M_buckets = __buckets_ptr_traits::pointer_to(_M_single_bucket); } else std::swap(_M_buckets, __x._M_buckets); @@ -1833,13 +1857,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>:: _M_find_before_node(size_type __bkt, const key_type& __k, __hash_code __code) const - -> __node_base_ptr + -> __node_base* { - __node_base_ptr __prev_p = _M_buckets[__bkt]; + __node_base* __prev_p = _M_buckets[__bkt]; if (!__prev_p) return nullptr; - for (__node_ptr __p = static_cast<__node_ptr>(__prev_p->_M_nxt);; + for (__node_type* __p = + static_cast<__node_type*>(std::__to_address(__prev_p->_M_nxt));; __p = __p->_M_next()) { if (this->_M_equals(__k, __code, *__p)) @@ -1863,13 +1888,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>:: _M_find_before_node_tr(size_type __bkt, const _Kt& __k, __hash_code __code) const - -> __node_base_ptr + -> __node_base* { - __node_base_ptr __prev_p = _M_buckets[__bkt]; + __node_base* __prev_p = _M_buckets[__bkt]; if (!__prev_p) return nullptr; - for (__node_ptr __p = static_cast<__node_ptr>(__prev_p->_M_nxt);; + for (__node_type* __p = + static_cast<__node_type*>(std::__to_address(__prev_p->_M_nxt));; __p = __p->_M_next()) { if (this->_M_equals_tr(__k, __code, *__p)) @@ -1910,7 +1936,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION if (__node->_M_nxt) // We must update former begin bucket that is pointing to // _M_before_begin. 
- _M_buckets[_M_bucket_index(*__node->_M_next())] = __node; + _M_buckets[_M_bucket_index(*__node->_M_next())] = + std::__to_address(__node); _M_buckets[__bkt] = &_M_before_begin; } @@ -1947,12 +1974,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION auto _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal, _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>:: - _M_get_previous_node(size_type __bkt, __node_ptr __n) - -> __node_base_ptr + _M_get_previous_node(size_type __bkt, __node_type* __n) + -> __node_base* { - __node_base_ptr __prev_n = _M_buckets[__bkt]; - while (__prev_n->_M_nxt != __n) - __prev_n = __prev_n->_M_nxt; + __node_base* __prev_n = _M_buckets[__bkt]; + while (std::__to_address(__prev_n->_M_nxt) != __n) + __prev_n = std::__to_address(__prev_n->_M_nxt); return __prev_n; } @@ -1972,7 +1999,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION const key_type& __k = _ExtractKey{}(__node._M_node->_M_v()); __hash_code __code = this->_M_hash_code(__k); size_type __bkt = _M_bucket_index(__code); - if (__node_ptr __p = _M_find_node(__bkt, __k, __code)) + if (__node_type* __p = _M_find_node(__bkt, __k, __code)) // There is already an equivalent node, no insertion return std::make_pair(iterator(__p), false); @@ -2032,7 +2059,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // Always insert at the beginning of the bucket. _M_insert_bucket_begin(__bkt, __node); ++_M_element_count; - return iterator(__node); + return iterator(std::__to_address(__node)); } template:: - _M_insert_multi_node(__node_ptr __hint, + _M_insert_multi_node(__node_type* __hint, __hash_code __code, __node_ptr __node) -> iterator { @@ -2059,7 +2086,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // Find the node before an equivalent one or use hint if it exists and // if it is equivalent. - __node_base_ptr __prev + __node_base* __prev = __builtin_expect(__hint != nullptr, false) && this->_M_equals(__k, __code, *__hint) ? 
__hint @@ -2078,7 +2105,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION { size_type __next_bkt = _M_bucket_index(*__node->_M_next()); if (__next_bkt != __bkt) - _M_buckets[__next_bkt] = __node; + _M_buckets[__next_bkt] = std::__to_address(__node); } } else @@ -2087,7 +2114,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // equivalent elements' relative positions. _M_insert_bucket_begin(__bkt, __node); ++_M_element_count; - return iterator(__node); + return iterator(std::__to_address(__node)); } // Insert v if no element with its key is already present. @@ -2106,8 +2133,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION __hash_code __code = this->_M_hash_code_tr(__k); size_type __bkt = _M_bucket_index(__code); - if (__node_ptr __node = _M_find_node_tr(__bkt, __k, __code)) - return { iterator(__node), false }; + if (__node_type* __n = _M_find_node_tr(__bkt, __k, __code)) + return { iterator(__n), false }; _Scoped_node __node { __node_builder_t::_S_build(std::forward<_Kt>(__k), @@ -2158,14 +2185,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION erase(const_iterator __it) -> iterator { - __node_ptr __n = __it._M_cur; + __node_type* __n = __it._M_cur; std::size_t __bkt = _M_bucket_index(*__n); // Look for previous node to unlink it from the erased one, this // is why we need buckets to contain the before begin to make // this search fast. - __node_base_ptr __prev_n = _M_get_previous_node(__bkt, __n); - return _M_erase(__bkt, __prev_n, __n); + __node_base* __prev_n = _M_get_previous_node(__bkt, __n); + return _M_erase(__bkt, __prev_n, _S_cast(__prev_n->_M_nxt)); } template:: - _M_erase(size_type __bkt, __node_base_ptr __prev_n, __node_ptr __n) + _M_erase(size_type __bkt, __node_base* __prev_n, __node_ptr __n) -> iterator { if (__prev_n == _M_buckets[__bkt]) - _M_remove_bucket_begin(__bkt, __n->_M_next(), + _M_remove_bucket_begin(__bkt, __n->_M_next_ptr(), __n->_M_nxt ? 
_M_bucket_index(*__n->_M_next()) : 0); else if (__n->_M_nxt) { @@ -2210,12 +2237,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION std::size_t __bkt = _M_bucket_index(__code); // Look for the node before the first matching node. - __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code); + __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code); if (!__prev_n) return 0; // We found a matching node, erase it. - __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt); + __node_ptr __n = _S_cast(__prev_n->_M_nxt); _M_erase(__bkt, __prev_n, __n); return 1; } @@ -2234,7 +2261,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION std::size_t __bkt = _M_bucket_index(__code); // Look for the node before the first matching node. - __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code); + __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code); if (!__prev_n) return 0; @@ -2244,8 +2271,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // We use one loop to find all matching nodes and another to deallocate // them so that the key stays valid during the first loop. It might be // invalidated indirectly when destroying nodes. 
- __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt); - __node_ptr __n_last = __n->_M_next(); + __node_ptr __n = _S_cast(__prev_n->_M_nxt); + __node_type* __n_last = __n->_M_next(); while (__n_last && this->_M_node_equals(*__n, *__n_last)) __n_last = __n_last->_M_next(); @@ -2255,19 +2282,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION size_type __result = 0; do { - __node_ptr __p = __n->_M_next(); + __node_ptr __p = __n->_M_next_ptr(); this->_M_deallocate_node(__n); __n = __p; ++__result; } - while (__n != __n_last); + while (std::__to_address(__n) != __n_last); _M_element_count -= __result; if (__prev_n == _M_buckets[__bkt]) - _M_remove_bucket_begin(__bkt, __n_last, __n_last_bkt); + _M_remove_bucket_begin(__bkt, __n, __n_last_bkt); else if (__n_last_bkt != __bkt) _M_buckets[__n_last_bkt] = __prev_n; - __prev_n->_M_nxt = __n_last; + __prev_n->_M_nxt = __n; return __result; } @@ -2281,41 +2308,42 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION erase(const_iterator __first, const_iterator __last) -> iterator { - __node_ptr __n = __first._M_cur; - __node_ptr __last_n = __last._M_cur; + __node_type* __n = __first._M_cur; + __node_type* __last_n = __last._M_cur; if (__n == __last_n) return iterator(__n); std::size_t __bkt = _M_bucket_index(*__n); - __node_base_ptr __prev_n = _M_get_previous_node(__bkt, __n); + __node_base* __prev_n = _M_get_previous_node(__bkt, __n); + __node_ptr __nptr = _S_cast(__prev_n->_M_nxt); bool __is_bucket_begin = __n == _M_bucket_begin(__bkt); std::size_t __n_bkt = __bkt; for (;;) { do { - __node_ptr __tmp = __n; - __n = __n->_M_next(); + __node_ptr __tmp = __nptr; + __nptr = __nptr->_M_next_ptr(); this->_M_deallocate_node(__tmp); --_M_element_count; - if (!__n) + if (!__nptr) break; - __n_bkt = _M_bucket_index(*__n); + __n_bkt = _M_bucket_index(*__nptr); } - while (__n != __last_n && __n_bkt == __bkt); + while (std::__to_address(__nptr) != __last_n && __n_bkt == __bkt); if (__is_bucket_begin) - _M_remove_bucket_begin(__bkt, __n, __n_bkt); - if (__n == 
__last_n) + _M_remove_bucket_begin(__bkt, __nptr, __n_bkt); + if (std::__to_address(__nptr) == __last_n) break; __is_bucket_begin = true; __bkt = __n_bkt; } - if (__n && (__n_bkt != __bkt || __is_bucket_begin)) + if (__nptr && (__n_bkt != __bkt || __is_bucket_begin)) _M_buckets[__n_bkt] = __prev_n; - __prev_n->_M_nxt = __n; - return iterator(__n); + __prev_n->_M_nxt = __nptr; + return iterator(std::__to_address(__nptr)); } template:: clear() noexcept { - this->_M_deallocate_nodes(_M_begin()); - __builtin_memset(_M_buckets, 0, - _M_bucket_count * sizeof(__node_base_ptr)); + this->_M_deallocate_nodes(_M_pbegin()); + __builtin_memset(std::__to_address(_M_buckets), 0, + _M_bucket_count * sizeof(__node_base*)); _M_element_count = 0; _M_before_begin._M_nxt = nullptr; } @@ -2390,12 +2418,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _M_rehash_aux(size_type __bkt_count, true_type /* __uks */) { __buckets_ptr __new_buckets = _M_allocate_buckets(__bkt_count); - __node_ptr __p = _M_begin(); + __node_type* __p = _M_begin(); _M_before_begin._M_nxt = nullptr; std::size_t __bbegin_bkt = 0; while (__p) { - __node_ptr __next = __p->_M_next(); + __node_type* __next = __p->_M_next(); std::size_t __bkt = __hash_code_base::_M_bucket_index(*__p, __bkt_count); if (!__new_buckets[__bkt]) @@ -2433,16 +2461,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION _M_rehash_aux(size_type __bkt_count, false_type /* __uks */) { __buckets_ptr __new_buckets = _M_allocate_buckets(__bkt_count); - __node_ptr __p = _M_begin(); + __node_type* __p = _M_begin(); _M_before_begin._M_nxt = nullptr; std::size_t __bbegin_bkt = 0; std::size_t __prev_bkt = 0; - __node_ptr __prev_p = nullptr; + __node_type* __prev_p = nullptr; bool __check_bucket = false; while (__p) { - __node_ptr __next = __p->_M_next(); + __node_type* __next = __p->_M_next(); std::size_t __bkt = __hash_code_base::_M_bucket_index(*__p, __bkt_count); diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h index 
2130c958262..495d5ad5ae9 100644 --- a/libstdc++-v3/include/bits/hashtable_policy.h +++ b/libstdc++-v3/include/bits/hashtable_policy.h @@ -59,24 +59,29 @@ namespace __detail // Helper function: return distance(first, last) for forward // iterators, or 0/1 for input iterators. - template + template inline typename std::iterator_traits<_Iterator>::difference_type __distance_fw(_Iterator __first, _Iterator __last, std::input_iterator_tag) { return __first != __last ? 1 : 0; } - template + template inline typename std::iterator_traits<_Iterator>::difference_type __distance_fw(_Iterator __first, _Iterator __last, std::forward_iterator_tag) { return std::distance(__first, __last); } - template + template inline typename std::iterator_traits<_Iterator>::difference_type __distance_fw(_Iterator __first, _Iterator __last) { return __distance_fw(__first, __last, std::__iterator_category(__first)); } + template + using __alloc_val_ptr = + typename std::allocator_traits<__alloc_rebind<_Alloc, + _Value>>::pointer; + struct _Identity { template @@ -146,10 +151,10 @@ namespace __detail using __hashtable_alloc = _Hashtable_alloc<__node_alloc_type>; using __node_alloc_traits = typename __hashtable_alloc::__node_alloc_traits; - using __node_type = typename __hashtable_alloc::__node_type; + using __node_ptr = typename __hashtable_alloc::__node_ptr; public: - _ReuseOrAllocNode(__node_type* __nodes, __hashtable_alloc& __h) + _ReuseOrAllocNode(__node_ptr __nodes, __hashtable_alloc& __h) : _M_nodes(__nodes), _M_h(__h) { } _ReuseOrAllocNode(const _ReuseOrAllocNode&) = delete; @@ -157,13 +162,13 @@ namespace __detail { _M_h._M_deallocate_nodes(_M_nodes); } template - __node_type* + __node_ptr operator()(_Args&&... 
__args) const { if (_M_nodes) { - __node_type* __node = _M_nodes; - _M_nodes = _M_nodes->_M_next(); + __node_ptr __node = _M_nodes; + _M_nodes = _M_nodes->_M_next_ptr(); __node->_M_nxt = nullptr; auto& __a = _M_h._M_node_allocator(); __node_alloc_traits::destroy(__a, __node->_M_valptr()); @@ -183,7 +188,7 @@ namespace __detail } private: - mutable __node_type* _M_nodes; + mutable __node_ptr _M_nodes; __hashtable_alloc& _M_h; }; @@ -194,14 +199,14 @@ namespace __detail { private: using __hashtable_alloc = _Hashtable_alloc<_NodeAlloc>; - using __node_type = typename __hashtable_alloc::__node_type; + using __node_ptr = typename __hashtable_alloc::__node_ptr; public: _AllocNode(__hashtable_alloc& __h) : _M_h(__h) { } template - __node_type* + __node_ptr operator()(_Args&&... __args) const { return _M_h._M_allocate_node(std::forward<_Args>(__args)...); } @@ -259,6 +264,28 @@ namespace __detail _Hash_node_base(_Hash_node_base* __next) noexcept : _M_nxt(__next) { } }; + /** + * struct _Hash_pnode_base + * + * Like _Hash_node_base but used in case of custom pointer type defined by the + * allocator. + */ + template + struct _Hash_pnode_base + { + using __node_ptr = _NodePtr; + + __node_ptr _M_nxt; + + _Hash_pnode_base() + noexcept(std::is_nothrow_default_constructible<__node_ptr>::value) + : _M_nxt() { } + + _Hash_pnode_base(__node_ptr __next) + noexcept(std::is_nothrow_copy_constructible<__node_ptr>::value) + : _M_nxt(__next) { } + }; + /** * struct _Hash_node_value_base * @@ -290,18 +317,23 @@ namespace __detail /** * Primary template struct _Hash_node_code_cache. + * + * No cache. */ template struct _Hash_node_code_cache { }; /** - * Specialization for node with cache, struct _Hash_node_code_cache. + * Specialization for node with cache. */ template<> struct _Hash_node_code_cache { std::size_t _M_hash_code; }; + /** + * Node with value and optionally a cache for the hash code. 
+ */ template struct _Hash_node_value : _Hash_node_value_base<_Value> @@ -309,28 +341,79 @@ namespace __detail { }; /** - * Primary template struct _Hash_node. + * struct _Hash_node. + * + * The node definition when the allocator is using raw pointers. */ template struct _Hash_node : _Hash_node_base , _Hash_node_value<_Value, _Cache_hash_code> { + using __node_base = _Hash_node_base; + using __node_ptr = _Hash_node*; + using __node_type = _Hash_node; + using __node_value_cache_type = + _Hash_node_value<_Value, _Cache_hash_code>; + _Hash_node* _M_next() const noexcept { return static_cast<_Hash_node*>(this->_M_nxt); } + + __node_ptr + _M_next_ptr() const noexcept + { return _M_next(); } }; + /** + * struct _Hash_pnode. + * + * The node definition used when the allocator define a custom pointer type. + */ + template + struct _Hash_pnode + : _Hash_pnode_base<__ptr_rebind<_Ptr, + _Hash_pnode<_Ptr, _Cache_hash_code>>> + , _Hash_node_value::element_type, + _Cache_hash_code> + { + using __node_base = + _Hash_pnode_base<__ptr_rebind<_Ptr, + _Hash_pnode<_Ptr, _Cache_hash_code>>>; + using __node_ptr = typename __node_base::__node_ptr; + using __node_type = + typename std::pointer_traits<__node_ptr>::element_type; + using value_type = typename __node_type::value_type; + using __node_value_cache_type = + _Hash_node_value; + typedef typename std::pointer_traits<__node_ptr>::difference_type + difference_type; + + __node_type* + _M_next() const noexcept + { return std::__to_address(this->_M_nxt); } + + __node_ptr + _M_next_ptr() const noexcept + { return this->_M_nxt; } + }; + + template + using __get_node_type = typename std::conditional< + std::is_pointer<__alloc_val_ptr<_Alloc, _Value>>::value, + _Hash_node<_Value, __hash_cached>, + _Hash_pnode<__alloc_val_ptr<_Alloc, _Value>, __hash_cached>>::type; + /// Base class for node iterators. 
-  template<typename _Value, bool _Cache_hash_code>
-    struct _Node_iterator_base
+  template<typename _NodeType>
+    struct _Hashtable_iterator_base
    {
-      using __node_type = _Hash_node<_Value, _Cache_hash_code>;
+      using __node_type = _NodeType;

      __node_type* _M_cur;

-      _Node_iterator_base() : _M_cur(nullptr) { }
-      _Node_iterator_base(__node_type* __p) noexcept
+      _Hashtable_iterator_base() : _M_cur(nullptr) { }
+      _Hashtable_iterator_base(__node_type* __p) noexcept
      : _M_cur(__p) { }

      void
@@ -338,18 +421,32 @@ namespace __detail
      { _M_cur = _M_cur->_M_next(); }

      friend bool
-      operator==(const _Node_iterator_base& __x, const _Node_iterator_base& __y)
-      noexcept
+      operator==(const _Hashtable_iterator_base& __x,
+		 const _Hashtable_iterator_base& __y) noexcept
      { return __x._M_cur == __y._M_cur; }

#if __cpp_impl_three_way_comparison < 201907L
      friend bool
-      operator!=(const _Node_iterator_base& __x, const _Node_iterator_base& __y)
-      noexcept
+      operator!=(const _Hashtable_iterator_base& __x,
+		 const _Hashtable_iterator_base& __y) noexcept
      { return __x._M_cur != __y._M_cur; }
#endif
    };

+  /// Base class for node iterators.
+  template<typename _Value, bool _Cache_hash_code>
+    struct _Node_iterator_base
+    : _Hashtable_iterator_base<_Hash_node<_Value, _Cache_hash_code>>
+    {
+      using __base_type =
+	_Hashtable_iterator_base<_Hash_node<_Value, _Cache_hash_code>>;
+      using __node_type = typename __base_type::__node_type;
+
+      _Node_iterator_base() = default;
+      _Node_iterator_base(__node_type* __p) noexcept
+      : __base_type(__p) { }
+    };
+
  /// Node iterators, used to iterate through all the hashtable.
  template<typename _Value, bool __constant_iterators, bool __cache>
    struct _Node_iterator
@@ -451,6 +548,110 @@ namespace __detail
      }
    };

+  /// Node iterators, used to iterate through all the hashtable.
+  template<typename _NodeType, bool __constant_iterators>
+    struct _Hashtable_iterator
+    : public _Hashtable_iterator_base<_NodeType>
+    {
+    private:
+      using __base_type = _Hashtable_iterator_base<_NodeType>;
+      using __node_type = typename __base_type::__node_type;
+
+    public:
+      typedef typename __node_type::value_type		value_type;
+      typedef typename __node_type::difference_type	difference_type;
+      typedef std::forward_iterator_tag			iterator_category;
+
+      using pointer = typename std::conditional<__constant_iterators,
+						const value_type*,
+						value_type*>::type;
+
+      using reference = typename std::conditional<__constant_iterators,
+						  const value_type&,
+						  value_type&>::type;
+
+      _Hashtable_iterator() noexcept
+      : __base_type(nullptr) { }
+
+      explicit
+      _Hashtable_iterator(__node_type* __p) noexcept
+      : __base_type(__p) { }
+
+      reference
+      operator*() const noexcept
+      { return this->_M_cur->_M_v(); }
+
+      pointer
+      operator->() const noexcept
+      { return this->_M_cur->_M_valptr(); }
+
+      _Hashtable_iterator&
+      operator++() noexcept
+      {
+	this->_M_incr();
+	return *this;
+      }
+
+      _Hashtable_iterator
+      operator++(int) noexcept
+      {
+	_Hashtable_iterator __tmp(*this);
+	this->_M_incr();
+	return __tmp;
+      }
+    };
+
+  /// Node const_iterators, used to iterate through all the hashtable.
+  template<typename _NodeType, bool __constant_iterators>
+    struct _Hashtable_const_iterator
+    : public _Hashtable_iterator_base<_NodeType>
+    {
+    private:
+      using __base_type = _Hashtable_iterator_base<_NodeType>;
+      using __node_type = typename __base_type::__node_type;
+
+    public:
+      typedef typename __node_type::value_type		value_type;
+      typedef typename __node_type::difference_type	difference_type;
+      typedef std::forward_iterator_tag			iterator_category;
+
+      typedef const value_type*				pointer;
+      typedef const value_type&				reference;
+
+      _Hashtable_const_iterator() noexcept
+      : __base_type(nullptr) { }
+
+      explicit
+      _Hashtable_const_iterator(__node_type* __p) noexcept
+      : __base_type(__p) { }
+
+      _Hashtable_const_iterator(
+	const _Hashtable_iterator<_NodeType,
+				  __constant_iterators>& __x) noexcept
+      : __base_type(__x._M_cur) { }
+
+      reference
+      operator*() const noexcept
+      { return this->_M_cur->_M_v(); }
+
+      pointer
+      operator->() const noexcept
+      { return this->_M_cur->_M_valptr(); }
+
+      _Hashtable_const_iterator&
+      operator++() noexcept
+      {
+	this->_M_incr();
+	return *this;
+      }
+
+      _Hashtable_const_iterator
+      operator++(int) noexcept
+      {
+	_Hashtable_const_iterator __tmp(*this);
+	this->_M_incr();
+	return __tmp;
+      }
+    };
+
  // Many of class template _Hashtable's template parameters are policy
  // classes.  These are defaults for the policies.
@@ -837,16 +1038,15 @@ namespace __detail
      using __hash_cached = typename _Traits::__hash_cached;
      using __constant_iterators = typename _Traits::__constant_iterators;
+      using __alloc_ptr = __alloc_val_ptr<_Alloc, _Value>;

-      using __hashtable_alloc = _Hashtable_alloc<
-	__alloc_rebind<_Alloc, _Hash_node<_Value,
-					  __hash_cached::value>>>;
+      using __node_type = __get_node_type<_Alloc, _Value, __hash_cached::value>;
+      using __node_alloc_type = __alloc_rebind<_Alloc, __node_type>;

      using value_type = typename __hashtable_base::value_type;
      using size_type = typename __hashtable_base::size_type;

      using __unique_keys = typename _Traits::__unique_keys;
-      using __node_alloc_type = typename __hashtable_alloc::__node_alloc_type;
      using __node_gen_type = _AllocNode<__node_alloc_type>;

      __hashtable&
@@ -864,11 +1064,19 @@ namespace __detail
				const _NodeGetter&, false_type __uks);

    public:
-      using iterator = _Node_iterator<_Value, __constant_iterators::value,
-				      __hash_cached::value>;
-
-      using const_iterator = _Node_const_iterator<_Value, __constant_iterators::value,
-						  __hash_cached::value>;
+      using iterator =
+	typename std::conditional<std::is_pointer<__alloc_ptr>::value,
+	  _Node_iterator<_Value,
+			 __constant_iterators::value, __hash_cached::value>,
+	  _Hashtable_iterator<__node_type,
+			      __constant_iterators::value>>::type;
+
+      using const_iterator =
+	typename std::conditional<std::is_pointer<__alloc_ptr>::value,
+	  _Node_const_iterator<_Value,
+			       __constant_iterators::value, __hash_cached::value>,
+	  _Hashtable_const_iterator<__node_type,
+				    __constant_iterators::value>>::type;

      using __ireturn_type = typename std::conditional<__unique_keys::value,
						       std::pair<iterator, bool>,
						       iterator>::type;
@@ -1202,6 +1410,17 @@ namespace __detail
	   bool __cache_hash_code>
    struct _Local_iterator_base;

+  /**
+   *  Primary class template _Hashtable_local_iter_base.
+   *
+   *  Base class for local iterators, used to iterate within a bucket
+   *  but not between buckets.
+   */
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   bool __cache_hash_code>
+    struct _Hashtable_local_iter_base;
+
  /**
   *  Primary class template _Hash_code_base.
   *
@@ -1354,6 +1573,47 @@ namespace __detail
      _M_get_bucket() const { return _M_bucket; }  // for debug mode
    };

+  /// Partial specialization used when nodes contain a cached hash code.
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused>
+    struct _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+				      _Hash, _RangeHash, _Unused, true>
+    : public _Hashtable_iterator_base<_NodeType>
+    {
+    protected:
+      using __base_node_iter = _Hashtable_iterator_base<_NodeType>;
+      using value_type = typename _NodeType::value_type;
+      using __hash_code_base = _Hash_code_base<_Key, value_type, _ExtractKey,
+					       _Hash, _RangeHash, _Unused,
+					       true>;
+
+      _Hashtable_local_iter_base() = default;
+      _Hashtable_local_iter_base(const __hash_code_base&,
+				 _NodeType* __p,
+				 std::size_t __bkt, std::size_t __bkt_count)
+      : __base_node_iter(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count)
+      { }
+
+      void
+      _M_incr()
+      {
+	__base_node_iter::_M_incr();
+	if (this->_M_cur)
+	  {
+	    std::size_t __bkt
+	      = _RangeHash{}(this->_M_cur->_M_hash_code, _M_bucket_count);
+	    if (__bkt != _M_bucket)
+	      this->_M_cur = nullptr;
+	  }
+      }
+
+      std::size_t _M_bucket;
+      std::size_t _M_bucket_count;
+
+    public:
+      std::size_t
+      _M_get_bucket() const { return _M_bucket; }  // for debug mode
+    };
+
  // Uninitialized storage for a _Hash_code_base.
  // This type is DefaultConstructible and Assignable even if the
  // _Hash_code_base type isn't, so that _Local_iterator_base<..., false>
@@ -1468,6 +1728,84 @@ namespace __detail
      _M_get_bucket() const { return _M_bucket; }  // for debug mode
    };

+  // Partial specialization used when hash codes are not cached
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused>
+    struct _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+				      _Hash, _RangeHash, _Unused, false>
+    : _Hash_code_storage<_Hash>
+    , _Hashtable_iterator_base<_NodeType>
+    {
+    protected:
+      using value_type = typename _NodeType::value_type;
+      using __hash_code_base = _Hash_code_base<_Key, value_type, _ExtractKey,
+					       _Hash, _RangeHash, _Unused,
+					       false>;
+      using __node_iter_base = _Hashtable_iterator_base<_NodeType>;
+
+      _Hashtable_local_iter_base() : _M_bucket_count(-1) { }
+
+      _Hashtable_local_iter_base(const __hash_code_base& __base,
+				 _NodeType* __p,
+				 std::size_t __bkt, std::size_t __bkt_count)
+      : __node_iter_base(__p), _M_bucket(__bkt), _M_bucket_count(__bkt_count)
+      { _M_init(__base.hash_function()); }
+
+      ~_Hashtable_local_iter_base()
+      {
+	if (_M_bucket_count != -1)
+	  _M_destroy();
+      }
+
+      _Hashtable_local_iter_base(const _Hashtable_local_iter_base& __iter)
+      : __node_iter_base(__iter._M_cur), _M_bucket(__iter._M_bucket)
+      , _M_bucket_count(__iter._M_bucket_count)
+      {
+	if (_M_bucket_count != -1)
+	  _M_init(*__iter._M_h());
+      }
+
+      _Hashtable_local_iter_base&
+      operator=(const _Hashtable_local_iter_base& __iter)
+      {
+	if (_M_bucket_count != -1)
+	  _M_destroy();
+	this->_M_cur = __iter._M_cur;
+	_M_bucket = __iter._M_bucket;
+	_M_bucket_count = __iter._M_bucket_count;
+	if (_M_bucket_count != -1)
+	  _M_init(*__iter._M_h());
+	return *this;
+      }
+
+      void
+      _M_incr()
+      {
+	__node_iter_base::_M_incr();
+	if (this->_M_cur)
+	  {
+	    std::size_t __bkt =
+	      _RangeHash{}((*this->_M_h())(_ExtractKey{}(this->_M_cur->_M_v())),
+			   _M_bucket_count);
+	    if (__bkt != _M_bucket)
+	      this->_M_cur = nullptr;
+	  }
+      }
+
+      std::size_t _M_bucket;
+      std::size_t _M_bucket_count;
+
+      void
+      _M_init(const _Hash& __h)
+      { ::new(this->_M_h()) _Hash(__h); }
+
+      void
+      _M_destroy() { this->_M_h()->~_Hash(); }
+
+    public:
+      std::size_t
+      _M_get_bucket() const { return _M_bucket; }  // for debug mode
+    };
+
+  /// local iterators
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   bool __constant_iterators, bool __cache>
+    struct _Hashtable_local_iterator
+    : public _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+					_Hash, _RangeHash, _Unused, __cache>
+    {
+    private:
+      using __base_type =
+	_Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+				   _Hash, _RangeHash, _Unused, __cache>;
+      using __hash_code_base = typename __base_type::__hash_code_base;
+      using __node_type = typename __base_type::__node_type;
+
+    public:
+      typedef typename __base_type::value_type		value_type;
+      typedef typename std::conditional<__constant_iterators,
+					const value_type*, value_type*>::type
+							pointer;
+      typedef typename std::conditional<__constant_iterators,
+					const value_type&, value_type&>::type
+							reference;
+      typedef typename __node_type::difference_type	difference_type;
+      typedef std::forward_iterator_tag			iterator_category;
+
+      _Hashtable_local_iterator() = default;
+
+      _Hashtable_local_iterator(const __hash_code_base& __base,
+				__node_type* __n,
+				std::size_t __bkt, std::size_t __bkt_count)
+      : __base_type(__base, __n, __bkt, __bkt_count)
+      { }
+
+      reference
+      operator*() const
+      { return this->_M_cur->_M_v(); }
+
+      pointer
+      operator->() const
+      { return this->_M_cur->_M_valptr(); }
+
+      _Hashtable_local_iterator&
+      operator++()
+      {
+	this->_M_incr();
+	return *this;
+      }
+
+      _Hashtable_local_iterator
+      operator++(int)
+      {
+	_Hashtable_local_iterator __tmp(*this);
+	this->_M_incr();
+	return __tmp;
+      }
+    };
+
+  /// local const_iterators
+  template<typename _Key, typename _NodeType, typename _ExtractKey,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   bool __constant_iterators, bool __cache>
+    struct _Hashtable_const_local_iterator
+    : public _Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+					_Hash, _RangeHash, _Unused, __cache>
+    {
+    private:
+      using __base_type =
+	_Hashtable_local_iter_base<_Key, _NodeType, _ExtractKey,
+				   _Hash, _RangeHash, _Unused, __cache>;
+      using __hash_code_base = typename __base_type::__hash_code_base;
+      using __node_type = typename __base_type::__node_type;
+
+    public:
+      typedef typename __base_type::value_type		value_type;
+      typedef const value_type*				pointer;
+      typedef const value_type&				reference;
+      typedef typename __node_type::difference_type	difference_type;
+      typedef std::forward_iterator_tag			iterator_category;
+
+      _Hashtable_const_local_iterator() = default;
+
+      _Hashtable_const_local_iterator(const __hash_code_base& __base,
+				      __node_type* __n,
+				      std::size_t __bkt, std::size_t __bkt_count)
+      : __base_type(__base, __n, __bkt, __bkt_count)
+      { }
+
+      _Hashtable_const_local_iterator(const _Hashtable_local_iterator<
+				      _Key, _NodeType, _ExtractKey,
+				      _Hash, _RangeHash, _Unused,
+				      __constant_iterators,
+				      __cache>& __x)
+      : __base_type(__x)
+      { }
+
+      reference
+      operator*() const
+      { return this->_M_cur->_M_v(); }
+
+      pointer
+      operator->() const
+      { return this->_M_cur->_M_valptr(); }
+
+      _Hashtable_const_local_iterator&
+      operator++()
+      {
+	this->_M_incr();
+	return *this;
+      }
+
+      _Hashtable_const_local_iterator
+      operator++(int)
+      {
+	_Hashtable_const_local_iterator __tmp(*this);
+	this->_M_incr();
+	return __tmp;
+      }
+    };
+
+  template<typename _NodePtr, typename _Key, typename _Value,
+	   typename _ExtractKey, typename _Hash, typename _RangeHash,
+	   typename _Unused, bool __constant_iterators, bool __hash_cached>
+    using __local_iterator = typename std::conditional<
+      std::is_pointer<_NodePtr>::value,
+      _Local_iterator<_Key, _Value,
+		      _ExtractKey, _Hash, _RangeHash, _Unused,
+		      __constant_iterators, __hash_cached>,
+      _Hashtable_local_iterator<_Key,
+				typename std::pointer_traits<_NodePtr>::element_type,
+				_ExtractKey, _Hash, _RangeHash, _Unused,
+				__constant_iterators,
+				__hash_cached>>::type;
+
+  template<typename _NodePtr, typename _Key, typename _Value,
+	   typename _ExtractKey, typename _Hash, typename _RangeHash,
+	   typename _Unused, bool __constant_iterators, bool __hash_cached>
+    using __const_local_iterator = typename std::conditional<
+      std::is_pointer<_NodePtr>::value,
+      _Local_const_iterator<_Key, _Value,
+			    _ExtractKey, _Hash, _RangeHash, _Unused,
+			    __constant_iterators, __hash_cached>,
+      _Hashtable_const_local_iterator<_Key,
+				      typename std::pointer_traits<_NodePtr>::element_type,
+				      _ExtractKey, _Hash, _RangeHash, _Unused,
+				      __constant_iterators,
+				      __hash_cached>>::type;
+
  /**
   *  Primary class template _Hashtable_base.
   *
@@ -1740,8 +2228,9 @@ namespace __detail
      if (!__prev_n)
	return false;

-      for (__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);;
-	   __n = __n->_M_next())
+      __node_type* __n =
+	static_cast<__node_type*>(std::__to_address(__prev_n->_M_nxt));
+      for (;; __n = __n->_M_next())
	{
	  if (__n->_M_v() == *__itx)
	    break;
@@ -1800,7 +2289,8 @@ namespace __detail
      if (!__y_prev_n)
	return false;

-      __node_type* __y_n = static_cast<__node_type*>(__y_prev_n->_M_nxt);
+      __node_type* __y_n =
+	static_cast<__node_type*>(std::__to_address(__y_prev_n->_M_nxt));
      for (;;)
	{
	  if (__this->key_eq()(_ExtractKey{}(__y_n->_M_v()),
@@ -1847,16 +2337,12 @@ namespace __detail
      // Use __gnu_cxx to benefit from _S_always_equal and al.
      using __node_alloc_traits = __gnu_cxx::__alloc_traits<__node_alloc_type>;

-      using __value_alloc_traits = typename __node_alloc_traits::template
-	rebind_traits<_Value>;
-
-      using __node_ptr = __node_type*;
-      using __node_base = _Hash_node_base;
-      using __node_base_ptr = __node_base*;
+      using __node_ptr = typename __node_alloc_traits::pointer;
+      using __node_base = typename __node_type::__node_base;
      using __buckets_alloc_type =
-	__alloc_rebind<__node_alloc_type, __node_base_ptr>;
+	__alloc_rebind<__node_alloc_type, __node_base*>;
      using __buckets_alloc_traits = std::allocator_traits<__buckets_alloc_type>;
-      using __buckets_ptr = __node_base_ptr*;
+      using __buckets_ptr = typename __buckets_alloc_traits::pointer;

      _Hashtable_alloc() = default;
      _Hashtable_alloc(const _Hashtable_alloc&) = default;
@@ -1909,14 +2395,13 @@ namespace __detail
    -> __node_ptr
    {
      auto __nptr = __node_alloc_traits::allocate(_M_node_allocator(), 1);
-      __node_ptr __n = std::__to_address(__nptr);
      __try
	{
-	  ::new ((void*)__n) __node_type;
+	  ::new ((void*)std::__to_address(__nptr)) __node_type;
	  __node_alloc_traits::construct(_M_node_allocator(),
-					 __n->_M_valptr(),
+					 __nptr->_M_valptr(),
					 std::forward<_Args>(__args)...);
-	  return __n;
+	  return __nptr;
	}
      __catch(...)
	{
@@ -1927,20 +2412,18 @@ namespace __detail

  template<typename _NodeAlloc>
    void
-    _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node(__node_ptr __n)
+    _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node(__node_ptr __nptr)
    {
-      __node_alloc_traits::destroy(_M_node_allocator(), __n->_M_valptr());
-      _M_deallocate_node_ptr(__n);
+      __node_alloc_traits::destroy(_M_node_allocator(), __nptr->_M_valptr());
+      _M_deallocate_node_ptr(__nptr);
    }

  template<typename _NodeAlloc>
    void
-    _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node_ptr(__node_ptr __n)
+    _Hashtable_alloc<_NodeAlloc>::_M_deallocate_node_ptr(__node_ptr __nptr)
    {
-      typedef typename __node_alloc_traits::pointer _Ptr;
-      auto __ptr = std::pointer_traits<_Ptr>::pointer_to(*__n);
-      __n->~__node_type();
-      __node_alloc_traits::deallocate(_M_node_allocator(), __ptr, 1);
+      __nptr->~__node_type();
+      __node_alloc_traits::deallocate(_M_node_allocator(), __nptr, 1);
    }

  template<typename _NodeAlloc>
@@ -1950,7 +2433,7 @@ namespace __detail
      while (__n)
	{
	  __node_ptr __tmp = __n;
-	  __n = __n->_M_next();
+	  __n = __n->_M_next_ptr();
	  _M_deallocate_node(__tmp);
	}
    }

  template<typename _NodeAlloc>
@@ -1963,9 +2446,9 @@ namespace __detail
      __buckets_alloc_type __alloc(_M_node_allocator());

      auto __ptr = __buckets_alloc_traits::allocate(__alloc, __bkt_count);
-      __buckets_ptr __p = std::__to_address(__ptr);
-      __builtin_memset(__p, 0, __bkt_count * sizeof(__node_base_ptr));
-      return __p;
+      __builtin_memset(std::__to_address(__ptr), 0,
+		       __bkt_count * sizeof(__node_base*));
+      return __ptr;
    }

  template<typename _NodeAlloc>
@@ -1974,10 +2457,8 @@ namespace __detail
    _M_deallocate_buckets(__buckets_ptr __bkts,
			  std::size_t __bkt_count)
    {
-      typedef typename __buckets_alloc_traits::pointer _Ptr;
-      auto __ptr = std::pointer_traits<_Ptr>::pointer_to(*__bkts);
      __buckets_alloc_type __alloc(_M_node_allocator());
-      __buckets_alloc_traits::deallocate(__alloc, __ptr, __bkt_count);
+      __buckets_alloc_traits::deallocate(__alloc, __bkts, __bkt_count);
    }

///@} hashtable-detail
diff --git
a/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..5e9ff548032
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_map/allocator/ext_ptr.cc
@@ -0,0 +1,57 @@
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.

+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.

+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_map>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+  std::size_t
+  operator()(const T& t) const noexcept
+  { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_map<T, int, H, E,
+				  CustomPointerAlloc<std::pair<const T, int>>>;
+
+void test01()
+{
+  typedef CustomPointerAlloc<std::pair<const T, int>> alloc_type;
+  typedef std::unordered_map<T, int, H, E, alloc_type> test_type;
+  test_type v;
+  v.insert({ T(), 0 });
+  VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+  test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..6dd62a40293
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_multimap/allocator/ext_ptr.cc
@@ -0,0 +1,57 @@
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.

+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.

+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_map>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+  std::size_t
+  operator()(const T& t) const noexcept
+  { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_multimap<T, int, H, E,
+				       CustomPointerAlloc<std::pair<const T, int>>>;
+
+void test01()
+{
+  typedef CustomPointerAlloc<std::pair<const T, int>> alloc_type;
+  typedef std::unordered_multimap<T, int, H, E, alloc_type> test_type;
+  test_type v;
+  v.insert({ T(), 0 });
+  VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+  test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc
new file mode 100644
index 00000000000..dbc7b6247a2
--- /dev/null
+++ b/libstdc++-v3/testsuite/23_containers/unordered_multiset/allocator/ext_ptr.cc
@@ -0,0 +1,56 @@
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.

+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.

+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-do run { target c++11 } }
+
+#include <unordered_set>
+#include <memory>
+#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
+
+struct T { int i; };
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+  std::size_t
+  operator()(const T& t) const noexcept
+  { return t.i; }
+};
+
+struct E : std::equal_to<T>
+{ };
+
+using __gnu_test::CustomPointerAlloc;
+
+template class std::unordered_multiset<T, H, E, CustomPointerAlloc<T>>;
+
+void test01()
+{
+  typedef CustomPointerAlloc<T> alloc_type;
+  typedef std::unordered_multiset<T, H, E, alloc_type> test_type;
+  test_type v;
+  v.insert(T());
+  VERIFY( ++v.begin() == v.end() );
+}
+
+int main()
+{
+  test01();
+}
diff --git a/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc b/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
index c0e6a1f53a2..88814b3009c 100644
--- a/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
+++ b/libstdc++-v3/testsuite/23_containers/unordered_set/allocator/ext_ptr.cc
@@ -15,10 +15,7 @@
 // with this library; see the file COPYING3.  If not see
 // <http://www.gnu.org/licenses/>.

-// This test fails to compile since C++17 (see xfail-if below) so we can only
-// do a "run" test for C++11 and C++14, and a "compile" test for C++17 and up.
-// { dg-do run { target { c++11_only || c++14_only } } }
-// { dg-do compile { target c++17 } }
+// { dg-do run { target { c++11 } } }

 #include <unordered_set>
 #include <memory>
@@ -26,15 +23,22 @@
 #include <testsuite_allocator.h>

 struct T { int i; };
-bool operator==(const T& l, const T& r) { return l.i == r.i; }
-struct H { std::size_t operator()(const T& t) const noexcept { return t.i; }
+
+bool operator==(const T& l, const T& r)
+{ return l.i == r.i; }
+
+struct H
+{
+  std::size_t
+  operator()(const T& t) const noexcept
+  { return t.i; }
 };
-struct E : std::equal_to<T> { };
+
+struct E : std::equal_to<T>
+{ };

 using __gnu_test::CustomPointerAlloc;

-// { dg-xfail-if "node reinsertion assumes raw pointers" { c++17 } }
-// TODO when removing this xfail change the test back to "dg-do run".
 template class std::unordered_set<T, H, E, CustomPointerAlloc<T>>;

 void test01()

--------------F5F59F50A853E79603EBCD6C--