public inbox for libstdc++@gcc.gnu.org
* [PATCH][Hashtable 6/6] PR 68303 small size optimization
@ 2019-11-17 21:31 François Dumont
  2020-07-17 12:58 ` Jonathan Wakely
  0 siblings, 1 reply; 10+ messages in thread
From: François Dumont @ 2019-11-17 21:31 UTC (permalink / raw)
  To: libstdc++, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 2279 bytes --]

This is an implementation of PR 68303.

I try to use this idea as much as possible to avoid computing hash
codes.

Note that the tests do not show any gain. I guess hash computation must
be quite slow to get a benefit from it. So I am only activating it when
the hash code is not cached and its computation is not fast.
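The shape of the optimization can be sketched outside the library with a hypothetical wrapper (small_find, CountingHash and the threshold value are illustrative, not the patch's actual internals): below a small size threshold, keys are looked up by a linear scan with key_equal, so the hasher is never invoked.

```cpp
#include <cstddef>
#include <string>
#include <unordered_set>

// Hypothetical hasher that counts its invocations, to make the effect
// of the small-size shortcut observable.
struct CountingHash
{
  static inline std::size_t calls = 0;

  std::size_t
  operator()(const std::string& s) const
  {
    ++calls;
    return std::hash<std::string>{}(s);
  }
};

// Sketch of the optimization: when the container is small, scan the
// elements with key_equal instead of hashing the key.
template<typename Set, typename Key>
  bool
  small_find(const Set& s, const Key& key, std::size_t threshold = 20)
  {
    if (s.size() <= threshold)
      {
	for (const auto& elem : s)
	  if (s.key_eq()(elem, key))	// equality only, no hashing
	    return true;
	return false;
      }
    return s.find(key) != s.end();	// normal hash-based lookup
  }
```

Constructing the set still hashes the initial elements, but a subsequent small_find on the linear path never touches CountingHash, which is the gain PR 68303 asks for when hashing is expensive.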

     PR libstdc++/68303
     * include/bits/hashtable_policy.h
     (_Small_size_threshold<_Hash>): New.
     (_Hashtable_traits<>): Add _Small_size_threshold int template
     parameter, default to 0.
     (_Hashtable_traits<>::__small_size_threshold): New.
     (_Hash_code_base<>::_M_hash_code(const __node_type*)): New.
     (_Equal_helper<>::_S_node_equals): New.
     * include/bits/hashtable.h:
     (__small_size_threshold_default<>): New template alias.
     (_Hashtable<>::find): Add linear lookup when size is lower or equal to
     _Small_size_threshold.
     (_Hashtable<>::_M_emplace<_Args>(true_type, _Args&&...)): Add linear
     lookup when size is lower or equal to _Small_size_threshold.
     (_Hashtable<>::_M_insert<>(_Arg&&, const _NodeGenerator&, true_type,
     size_type)): Likewise.
     (_Hashtable<>::_M_compute_hash_code(const_iterator, const key_type&)):
     New.
     (_Hashtable<>::_M_emplace<_Args>(false_type, _Args&&...)): Use the latter.
     (_Hashtable<>::_M_insert(const_iterator, _Arg&&, const _NodeGenerator&,
     false_type)): Likewise.
     (_Hashtable<>::_M_find_before_node(const key_type&)): New.
     (_Hashtable<>::_M_erase(true_type, const key_type&)): Use the latter
     if size is lower or equal to _Small_size_threshold.
     (_Hashtable<>::_M_erase(false_type, const key_type&)): Likewise.
     * include/bits/unordered_map.h (__umap_traits): Adapt using small size
     threshold set to 20.
     (__ummap_traits): Likewise.
     * include/bits/unordered_set.h (__uset_traits, __umset_traits):
     Likewise.
     * src/c++11/hashtable_c++0x.cc: Add <bits/functional_hash.h> include.

Tested under Linux x86_64.

François


[-- Attachment #2: hashtable#7.patch --]
[-- Type: text/x-patch, Size: 19341 bytes --]

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index 9dadc62e328..460f25affe4 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -48,6 +48,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		       // Mandatory to have erase not throwing.
 		       __is_nothrow_invocable<const _Hash&, const _Tp&>>>;
 
+  template<bool __cache, typename _Hash>
+    using __small_size_threshold_default
+      = typename conditional<__cache,
+		// No small size optimization if hash code is cached...
+		integral_constant<int, 0>,
+		_Small_size_threshold<_Hash>>::type;
   /**
    *  Primary class template _Hashtable.
    *
@@ -743,6 +749,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base*
       _M_find_before_node(size_type, const key_type&, __hash_code) const;
 
+      __node_base*
+      _M_find_before_node(const key_type&);
+
       __node_type*
       _M_find_node(size_type __bkt, const key_type& __key,
 		   __hash_code __c) const
@@ -766,6 +775,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base*
       _M_get_previous_node(size_type __bkt, __node_base* __n);
 
+      pair<const_iterator, __hash_code>
+      _M_compute_hash_code(const_iterator __hint, const key_type& __k) const;
+
       // Insert node __n with hash code __code, in bucket __bkt if no
       // rehash (assumes no element with same key already present).
       // Takes ownership of __n if insertion succeeds, throws otherwise.
@@ -1490,6 +1502,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     find(const key_type& __k)
     -> iterator
     {
+      if (size() <= __traits_type::__small_size_threshold::value)
+	{
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, __it._M_cur))
+	      return __it;
+	  return end();
+	}
+
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__code);
       return iterator(_M_find_node(__bkt, __k, __code));
@@ -1504,6 +1524,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     find(const key_type& __k) const
     -> const_iterator
     {
+      if (size() <= __traits_type::__small_size_threshold::value)
+	{
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, __it._M_cur))
+	      return __it;
+	  return end();
+	}
+
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__code);
       return const_iterator(_M_find_node(__bkt, __k, __code));
@@ -1619,6 +1647,34 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       return nullptr;
     }
 
+  // Find the node before the one whose key compares equal to k.
+  // Return nullptr if no node is found.
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _Hash, typename _RehashPolicy, typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _Hash, _RehashPolicy, _Traits>::
+    _M_find_before_node(const key_type& __k)
+    -> __node_base*
+    {
+      __node_base* __prev_p = &_M_before_begin;
+      if (!__prev_p->_M_nxt)
+	return nullptr;
+
+      for (__node_type* __p = static_cast<__node_type*>(__prev_p->_M_nxt);
+	   __p != nullptr;
+	   __p = __p->_M_next())
+	{
+	  if (this->_M_key_equals(__k, __p))
+	    return __prev_p;
+
+	  __prev_p = __p;
+	}
+
+      return nullptr;
+    }
+
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _Hash, typename _RehashPolicy, typename _Traits>
@@ -1702,11 +1758,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	// First build the node to get access to the hash code
 	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
+	if (size() <= __traits_type::__small_size_threshold::value)
+	  {
+	    for (auto __it = begin(); __it != end(); ++__it)
+	      if (this->_M_key_equals(__k, __it._M_cur))
+		// There is already an equivalent node, no insertion
+		return { __it, false };
+	  }
+
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__code);
-	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
-	  // There is already an equivalent node, no insertion
-	  return std::make_pair(iterator(__p), false);
+	if (size() > __traits_type::__small_size_threshold::value)
+	  if (__node_type* __p = _M_find_node(__bkt, __k, __code))
+	    // There is already an equivalent node, no insertion
+	    return { iterator(__p), false };
 
 	// Insert the node
 	auto __pos = _M_insert_node(__uks, __bkt, __code, __node._M_node);
@@ -1728,13 +1793,40 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
 
-	__hash_code __code = this->_M_hash_code(__k);
+	auto __res = this->_M_compute_hash_code(__hint, __k);
 	auto __pos
-	  = _M_insert_node(__mks, __hint._M_cur, __code, __node._M_node);
+	  = _M_insert_node(__mks, __res.first._M_cur, __res.second,
+			   __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
 
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _Hash, typename _RehashPolicy, typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _Hash, _RehashPolicy, _Traits>::
+    _M_compute_hash_code(const_iterator __hint, const key_type& __k) const
+    -> pair<const_iterator, __hash_code>
+    {
+      if (size() <= __traits_type::__small_size_threshold::value)
+	{
+	  if (__hint != cend())
+	    {
+	      for (auto __it = __hint; __it != cend(); ++__it)
+		if (this->_M_key_equals(__k, __it._M_cur))
+		  return { __it, this->_M_hash_code(__it._M_cur) };
+	    }
+
+	  for (auto __it = cbegin(); __it != __hint; ++__it)
+	    if (this->_M_key_equals(__k, __it._M_cur))
+	      return { __it, this->_M_hash_code(__it._M_cur) };
+	}
+
+      return { __hint, this->_M_hash_code(__k) };
+    }
+
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _Hash, typename _RehashPolicy, typename _Traits>
@@ -1830,11 +1922,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       -> pair<iterator, bool>
       {
 	const key_type& __k = this->_M_extract()(__v);
+
+	if (size() <= __traits_type::__small_size_threshold::value)
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, __it._M_cur))
+	      return { __it, false };
+
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__code);
 
-	if (__node_type* __node = _M_find_node(__bkt, __k, __code))
-	  return { iterator(__node), false };
+	if (size() > __traits_type::__small_size_threshold::value)
+	  if (__node_type* __node = _M_find_node(__bkt, __k, __code))
+	    return { iterator(__node), false };
 
 	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 	auto __pos = _M_insert_node(__uks, __bkt, __code, __node._M_node);
@@ -1856,13 +1955,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       {
 	// First compute the hash code so that we don't do anything if it
 	// throws.
-	__hash_code __code = this->_M_hash_code(this->_M_extract()(__v));
+	auto __res
+	  = this->_M_compute_hash_code(__hint, this->_M_extract()(__v));
 
 	// Second allocate new node so that we don't rehash if it throws.
 	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
-	const key_type& __k = this->_M_extract()(__node._M_node->_M_v());
 	auto __pos
-	  = _M_insert_node(__mks, __hint._M_cur, __code, __node._M_node);
+	  = _M_insert_node(__mks, __res.first._M_cur, __res.second,
+			   __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
@@ -1922,16 +2022,33 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     _M_erase(__unique_keys_t, const key_type& __k)
     -> size_type
     {
-      __hash_code __code = this->_M_hash_code(__k);
-      std::size_t __bkt = _M_bucket_index(__code);
+      __node_base* __prev_n;
+      __node_type* __n;
+      std::size_t __bkt;
+      if (size() <= __traits_type::__small_size_threshold::value)
+	{
+	  __prev_n = _M_find_before_node(__k);
+	  if (!__prev_n)
+	    return 0;
 
-      // Look for the node before the first matching node.
-      __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
-      if (!__prev_n)
-	return 0;
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_type*>(__prev_n->_M_nxt);
+	  __bkt = _M_bucket_index(__n);
+	}
+      else
+	{
+	  __hash_code __code = this->_M_hash_code(__k);
+	  __bkt = _M_bucket_index(__code);
+
+	  // Look for the node before the first matching node.
+	  __prev_n = _M_find_before_node(__bkt, __k, __code);
+	  if (!__prev_n)
+	    return 0;
+
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_type*>(__prev_n->_M_nxt);
+	}
 
-      // We found a matching node, erase it.
-      __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
       _M_erase(__bkt, __prev_n, __n);
       return 1;
     }
@@ -1945,13 +2062,31 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     _M_erase(__multi_keys_t, const key_type& __k)
     -> size_type
     {
-      __hash_code __code = this->_M_hash_code(__k);
-      std::size_t __bkt = _M_bucket_index(__code);
+      std::size_t __bkt;
+      __node_base* __prev_n;
+      __node_type* __n;
+      if (size() <= __traits_type::__small_size_threshold::value)
+	{
+	  __prev_n = _M_find_before_node(__k);
+	  if (!__prev_n)
+	    return 0;
 
-      // Look for the node before the first matching node.
-      __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
-      if (!__prev_n)
-	return 0;
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_type*>(__prev_n->_M_nxt);
+	  __bkt = _M_bucket_index(__n);
+	}
+      else
+	{
+	  __hash_code __code = this->_M_hash_code(__k);
+	  __bkt = _M_bucket_index(__code);
+
+	  // Look for the node before the first matching node.
+	  __prev_n = _M_find_before_node(__bkt, __k, __code);
+	  if (!__prev_n)
+	    return 0;
+
+	  __n = static_cast<__node_type*>(__prev_n->_M_nxt);
+	}
 
       // _GLIBCXX_RESOLVE_LIB_DEFECTS
       // 526. Is it undefined if a function in the standard changes
@@ -1959,7 +2094,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // We use one loop to find all matching nodes and another to deallocate
       // them so that the key stays valid during the first loop. It might be
       // invalidated indirectly when destroying nodes.
-      __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
       __node_type* __n_last = __n->_M_next();
       while (__n_last && this->_M_node_equals(__n, __n_last))
 	__n_last = __n_last->_M_next();
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 11ea47b322e..6f329c82335 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -45,6 +45,19 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	   typename _Traits>
     class _Hashtable;
 
+  /**
+   *  struct _Small_size_threshold
+   *
+   *  Traits like type giving the threshold below which the hashtable will be
+   *  considered as small.
+   *
+   *  @tparam _Hash The hash functor.
+   */
+  template<typename _Hash>
+    struct _Small_size_threshold
+    : public std::integral_constant<int, __is_fast_hash<_Hash>::value ? 0 : 20>
+    { };
+
 namespace __detail
 {
   /**
@@ -203,13 +216,22 @@ namespace __detail
    *  be an arbitrary number. This is true for unordered_set and
    *  unordered_map, false for unordered_multiset and
    *  unordered_multimap.
+   *
+   *  @tparam _Small_size_threshold  Integer value. Threshold below which
+   *  hashtable will be considered as small enough to perform linear
+   *  lookup.
    */
-  template<bool _Cache_hash_code, bool _Constant_iterators, bool _Unique_keys>
+  template<bool _Cache_hash_code,
+	   bool _Constant_iterators,
+	   bool _Unique_keys,
+	   int _Small_size_threshold = 0>
     struct _Hashtable_traits
     {
       using __hash_cached = __bool_constant<_Cache_hash_code>;
       using __constant_iterators = __bool_constant<_Constant_iterators>;
       using __unique_keys = __bool_constant<_Unique_keys>;
+      using __small_size_threshold
+	= integral_constant<int, _Small_size_threshold>;
     };
 
   /**
@@ -1039,9 +1061,11 @@ namespace __detail
     struct _Rehash_base<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 			_Hash, _RehashPolicy, _Traits, true_type>
     {
+    private:
       using __hashtable = _Hashtable<_Key, _Value, _Alloc, _ExtractKey,
 				     _Equal, _Hash, _RehashPolicy, _Traits>;
 
+    public:
       float
       max_load_factor() const noexcept
       {
@@ -1189,6 +1213,10 @@ namespace __detail
 	return _M_hash()(__k);
       }
 
+      __hash_code
+      _M_hash_code(const __node_type* __n) const
+      { return _M_hash_code(_M_extract()(__n->_M_v())); }
+
       std::size_t
       _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
       { return _RangedHash{}(__c, __bkt_count); }
@@ -1198,9 +1226,7 @@ namespace __detail
 	noexcept( noexcept(declval<const _Hash&>()(declval<const _Key&>()))
 		  && noexcept(declval<const _RangedHash&>()((__hash_code)0,
 							   (std::size_t)0)) )
-      {
-	return _RangedHash{}(_M_hash()(_M_extract()(__p->_M_v())), __bkt_count);
-      }
+      { return _RangedHash{}(_M_hash_code(__p), __bkt_count); }
 
       void
       _M_store_code(__node_type*, __hash_code) const
@@ -1268,6 +1294,10 @@ namespace __detail
 	return _M_hash()(__k);
       }
 
+      __hash_code
+      _M_hash_code(const __node_type* __n) const
+      { return __n->_M_hash_code; }
+
       std::size_t
       _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
       { return _RangedHash{}(__c, __bkt_count); }
@@ -1669,21 +1699,26 @@ namespace __detail
     { }
 
     bool
-    _M_equals(const _Key& __k, __hash_code __c, const __node_type* __n) const
+    _M_key_equals(const _Key& __k, const __node_type* __n) const
     {
       static_assert(__is_invocable<const _Equal&, const _Key&, const _Key&>{},
 	  "key equality predicate must be invocable with two arguments of "
 	  "key type");
+      return _M_eq()(__k, this->_M_extract()(__n->_M_v()));
+    }
+
+    bool
+    _M_equals(const _Key& __k, __hash_code __c, const __node_type* __n) const
+    {
       return _Equal_hash_code<__node_type>::_S_equals(__c, *__n)
-	&& _M_eq()(__k, this->_M_extract()(__n->_M_v()));
+	&& _M_key_equals(__k, __n);
     }
 
     bool
     _M_node_equals(const __node_type* __lhn, const __node_type* __rhn) const
     {
       return _Equal_hash_code<__node_type>::_S_node_equals(*__lhn, *__rhn)
-	&& _M_eq()(this->_M_extract()(__lhn->_M_v()),
-		   this->_M_extract()(__rhn->_M_v()));
+	&& _M_key_equals(this->_M_extract()(__lhn->_M_v()), __rhn);
     }
 
     void
diff --git a/libstdc++-v3/include/bits/unordered_map.h b/libstdc++-v3/include/bits/unordered_map.h
index 310cfd39d79..5265020f608 100644
--- a/libstdc++-v3/include/bits/unordered_map.h
+++ b/libstdc++-v3/include/bits/unordered_map.h
@@ -36,30 +36,36 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 _GLIBCXX_BEGIN_NAMESPACE_CONTAINER
 
   /// Base types for unordered_map.
-  template<bool _Cache>
-    using __umap_traits = __detail::_Hashtable_traits<_Cache, false, true>;
+  template<bool _Cache, typename _Hash>
+    using __umap_traits
+      = __detail::_Hashtable_traits<_Cache, false, true,
+			__small_size_threshold_default<_Cache, _Hash>::value>;
 
   template<typename _Key,
 	   typename _Tp,
 	   typename _Hash = hash<_Key>,
 	   typename _Pred = std::equal_to<_Key>,
 	   typename _Alloc = std::allocator<std::pair<const _Key, _Tp> >,
-	   typename _Tr = __umap_traits<__cache_default<_Key, _Hash>::value>>
+	   typename _Tr = __umap_traits<__cache_default<_Key, _Hash>::value,
+					_Hash>>
     using __umap_hashtable = _Hashtable<_Key, std::pair<const _Key, _Tp>,
                                         _Alloc, __detail::_Select1st,
 				        _Pred, _Hash,
 				        __detail::_Prime_rehash_policy, _Tr>;
 
   /// Base types for unordered_multimap.
-  template<bool _Cache>
-    using __ummap_traits = __detail::_Hashtable_traits<_Cache, false, false>;
+  template<bool _Cache, typename _Hash>
+    using __ummap_traits
+      = __detail::_Hashtable_traits<_Cache, false, false,
+			__small_size_threshold_default<_Cache, _Hash>::value>;
 
   template<typename _Key,
 	   typename _Tp,
 	   typename _Hash = hash<_Key>,
 	   typename _Pred = std::equal_to<_Key>,
 	   typename _Alloc = std::allocator<std::pair<const _Key, _Tp> >,
-	   typename _Tr = __ummap_traits<__cache_default<_Key, _Hash>::value>>
+	   typename _Tr = __ummap_traits<__cache_default<_Key, _Hash>::value,
+					 _Hash>>
     using __ummap_hashtable = _Hashtable<_Key, std::pair<const _Key, _Tp>,
 					 _Alloc, __detail::_Select1st,
 					 _Pred, _Hash,
diff --git a/libstdc++-v3/include/bits/unordered_set.h b/libstdc++-v3/include/bits/unordered_set.h
index 4319495f18b..9bfa8639faf 100644
--- a/libstdc++-v3/include/bits/unordered_set.h
+++ b/libstdc++-v3/include/bits/unordered_set.h
@@ -36,27 +36,33 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 _GLIBCXX_BEGIN_NAMESPACE_CONTAINER
 
   /// Base types for unordered_set.
-  template<bool _Cache>
-    using __uset_traits = __detail::_Hashtable_traits<_Cache, true, true>;
+  template<bool _Cache, typename _Hash>
+    using __uset_traits
+      = __detail::_Hashtable_traits<_Cache, true, true,
+			__small_size_threshold_default<_Cache, _Hash>::value>;
 
   template<typename _Value,
 	   typename _Hash = hash<_Value>,
 	   typename _Pred = std::equal_to<_Value>,
   	   typename _Alloc = std::allocator<_Value>,
-	   typename _Tr = __uset_traits<__cache_default<_Value, _Hash>::value>>
+	   typename _Tr = __uset_traits<__cache_default<_Value, _Hash>::value,
+					_Hash>>
     using __uset_hashtable = _Hashtable<_Value, _Value, _Alloc,
 					__detail::_Identity, _Pred, _Hash,
 					__detail::_Prime_rehash_policy, _Tr>;
 
   /// Base types for unordered_multiset.
-  template<bool _Cache>
-    using __umset_traits = __detail::_Hashtable_traits<_Cache, true, false>;
+  template<bool _Cache, typename _Hash>
+    using __umset_traits
+      = __detail::_Hashtable_traits<_Cache, true, false,
+			__small_size_threshold_default<_Cache, _Hash>::value>;
 
   template<typename _Value,
 	   typename _Hash = hash<_Value>,
 	   typename _Pred = std::equal_to<_Value>,
 	   typename _Alloc = std::allocator<_Value>,
-	   typename _Tr = __umset_traits<__cache_default<_Value, _Hash>::value>>
+	   typename _Tr = __umset_traits<__cache_default<_Value, _Hash>::value,
+					 _Hash>>
     using __umset_hashtable = _Hashtable<_Value, _Value, _Alloc,
 					 __detail::_Identity,
 					 _Pred, _Hash,
diff --git a/libstdc++-v3/src/c++11/hashtable_c++0x.cc b/libstdc++-v3/src/c++11/hashtable_c++0x.cc
index de437d00b56..dcf8a81fc5e 100644
--- a/libstdc++-v3/src/c++11/hashtable_c++0x.cc
+++ b/libstdc++-v3/src/c++11/hashtable_c++0x.cc
@@ -30,6 +30,7 @@
 #include <tuple>
 #include <ext/aligned_buffer.h>
 #include <ext/alloc_traits.h>
+#include <bits/functional_hash.h>
 #include <bits/hashtable_policy.h>
 
 namespace std _GLIBCXX_VISIBILITY(default)


* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
  2019-11-17 21:31 [PATCH][Hashtable 6/6] PR 68303 small size optimization François Dumont
@ 2020-07-17 12:58 ` Jonathan Wakely
  2021-08-16 19:03   ` François Dumont
  0 siblings, 1 reply; 10+ messages in thread
From: Jonathan Wakely @ 2020-07-17 12:58 UTC (permalink / raw)
  To: François Dumont; +Cc: libstdc++, gcc-patches

On 17/11/19 22:31 +0100, François Dumont wrote:
>This is an implementation of PR 68303.
>
>I try to use this idea as much as possible to avoid computing hash
>codes.
>
>Note that the tests do not show any gain. I guess hash computation
>must be quite slow to get a benefit from it. So I am only activating
>it when the hash code is not cached and its computation is not fast.

If the tests don't show any benefit, why bother making the change?

Does it help the example in the PR?

>    PR libstdc++/68303
>    * include/bits/hashtable_policy.h
>    (_Small_size_threshold<_Hash>): New.
>    (_Hashtable_traits<>): Add _Small_size_threshold int template
>    parameter, default to 0.
>    (_Hashtable_traits<>::__small_size_threshold): New.
>    (_Hash_code_base<>::_M_hash_code(const __node_type*)): New.
>    (_Equal_helper<>::_S_node_equals): New.
>    * include/bits/hashtable.h:
>    (__small_size_threshold_default<>): New template alias.
>    (_Hashtable<>::find): Add linear lookup when size is lower or equal to
>    _Small_size_threshold.
>    (_Hashtable<>::_M_emplace<_Args>(true_type, _Args&&...)): Add linear
>    lookup when size is lower or equal to _Small_size_threshold.
>    (_Hashtable<>::_M_insert<>(_Arg&&, const _NodeGenerator&, true_type,
>    size_type)): Likewise.
>    (_Hashtable<>::_M_compute_hash_code(const_iterator, const key_type&)):
>    New.
>    (_Hashtable<>::_M_emplace<_Args>(false_type, _Args&&...)): Use the latter.
>    (_Hashtable<>::_M_insert(const_iterator, _Arg&&, const _NodeGenerator&,
>    false_type)): Likewise.
>    (_Hashtable<>::_M_find_before_node(const key_type&)): New.
>    (_Hashtable<>::_M_erase(true_type, const key_type&)): Use the latter
>    if size is lower or equal to _Small_size_threshold.
>    (_Hashtable<>::_M_erase(false_type, const key_type&)): Likewise.
>    * include/bits/unordered_map.h (__umap_traits): Adapt using small size
>    threshold set to 20.
>    (__ummap_traits): Likewise.
>    * include/bits/unordered_set.h (__uset_traits, __umset_traits):
>    Likewise.
>    * src/c++11/hashtable_c++0x.cc: Add <bits/functional_hash.h> include.

The implementation of this seems unnecessarily complicated, and it
changes the mangled name of the _Hashtable_traits type, which changes
the mangled name of _Hashtable too.

It seems to me that instead of __traits_type::__small_size_threshold
being a typedef inside the traits class, we could just use a constexpr
function, maybe as a member of the _Hashtable_traits class template:

--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -201,6 +201,14 @@ namespace __detail
        using __hash_cached = __bool_constant<_Cache_hash_code>;
        using __constant_iterators = __bool_constant<_Constant_iterators>;
        using __unique_keys = __bool_constant<_Unique_keys>;
+
+      template<typename _Hash>
+       static constexpr size_t
+       __small_size_threshold()
+       {
+         return _Cache_hash_code ? 0
+           : __detail::_Small_size_threshold<_Hash>::value;
+       }
      };
  
Regarding the new _Small_size_threshold class template, do we really
need yet another customization point? Can't we just use
__is_fast_hash? e.g.

          return _Cache_hash_code || __is_fast_hash ? 0 : 20;

As an aside, it seems like a terrible design that _Hashtable_traits
doesn't actually depend on the hash function, so any time we want to
extend it to detect some other property of the hash function we need
to add extra template parameters. Far too late to fix now, but we can
extend it by adding members, not by changing its type.
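Jonathan's two suggestions — a constexpr member function instead of a new template parameter, and reusing __is_fast_hash — combine into something like this standalone sketch (is_fast_hash, hashtable_traits and slow_string_hash are mock-ups of the libstdc++ internals, not the real ones):

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Stand-in for libstdc++'s __is_fast_hash trait: fast unless said otherwise.
template<typename Hash>
  struct is_fast_hash { static constexpr bool value = true; };

// A hasher assumed to be slow, marked as such via specialization.
struct slow_string_hash
{
  std::size_t
  operator()(const std::string& s) const
  { return std::hash<std::string>{}(s); }
};

template<>
  struct is_fast_hash<slow_string_hash>
  { static constexpr bool value = false; };

// The threshold as a constexpr member function template instead of an
// extra non-type template parameter, so the traits type (and therefore
// the container's mangled name) is unchanged.
template<bool CacheHashCode>
  struct hashtable_traits
  {
    template<typename Hash>
      static constexpr std::size_t
      small_size_threshold()
      {
	// Linear lookup only pays off when the hash code is neither
	// cached nor fast to compute.
	return CacheHashCode || is_fast_hash<Hash>::value ? 0 : 20;
      }
  };
```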




* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
  2020-07-17 12:58 ` Jonathan Wakely
@ 2021-08-16 19:03   ` François Dumont
  2021-09-23  4:36     ` François Dumont
  0 siblings, 1 reply; 10+ messages in thread
From: François Dumont @ 2021-08-16 19:03 UTC (permalink / raw)
  To: Jonathan Wakely; +Cc: libstdc++, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 5735 bytes --]

On 17/07/20 2:58 pm, Jonathan Wakely wrote:
> On 17/11/19 22:31 +0100, François Dumont wrote:
>> This is an implementation of PR 68303.
>>
>> I try to use this idea as much as possible to avoid computing hash
>> codes.
>>
>> Note that the tests do not show any gain. I guess hash computation
>> must be quite slow to get a benefit from it. So I am only activating
>> it when the hash code is not cached and its computation is not fast.
>
> If the tests don't show any benefit, why bother making the change?

I eventually managed to demonstrate this optimization through a 
performance test case.

>
> Does it help the example in the PR?

No, the code attached to the PR just shows what the user did to put
this optimization in place on his side.

What I needed was a hash code computation that is slow compared to the
equality operation. I realized that I had to use longer strings to
achieve this.
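The reason longer strings expose the gain can be seen with a toy cost model (hash_chars_touched and equal_chars_touched are illustrative helpers, not library code): a typical string hash must touch every character, while operator== can stop at the first mismatch, or at the length check alone.

```cpp
#include <cstddef>
#include <string>

// Toy cost model: hashing touches every character of the string...
std::size_t
hash_chars_touched(const std::string& s)
{
  std::size_t h = 0, touched = 0;
  for (char c : s)
    {
      h = h * 131 + static_cast<unsigned char>(c);	// simple rolling hash
      ++touched;
    }
  (void)h;
  return touched;
}

// ...while equality can bail out early.
std::size_t
equal_chars_touched(const std::string& a, const std::string& b)
{
  if (a.size() != b.size())
    return 0;		// the length check alone decides
  std::size_t touched = 0;
  for (std::size_t i = 0; i < a.size(); ++i)
    {
      ++touched;
      if (a[i] != b[i])	// stop at the first mismatch
	break;
    }
  return touched;
}
```

With 1000-character keys differing in the first character, hashing costs 1000 character visits where an equality test costs one, which is the regime where the linear scan wins.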

Moreover, making this optimization dependent on
_Hashtable_traits::__hash_cached was just wrong, as we cannot use the
cached hash code here: the input is a key instance, not a node.

I introduce _Hashtable_hash_traits<_Hash> to offer a single
customization point, as this optimization depends heavily on the
relative cost of a hash code computation and a comparison. Maybe I
should put it at std namespace scope to ease partial specialization?
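What such a customization point would let users do can be sketched as follows (hashtable_hash_traits, logging_hash and identity_hash are hypothetical stand-ins for the proposed _Hashtable_hash_traits and user code, not the actual patch):

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Hypothetical stand-in for the proposed customization point: a default
// threshold, overridable per hasher by (partial) specialization.
template<typename Hash>
  struct hashtable_hash_traits
  {
    static constexpr std::size_t
    small_size_threshold()
    { return 20; }
  };

// A user-defined hasher wrapping another one.
template<typename Hash>
  struct logging_hash : Hash
  { };

// Partial specialization: a wrapper inherits the wrapped hasher's threshold.
template<typename Hash>
  struct hashtable_hash_traits<logging_hash<Hash>>
  : hashtable_hash_traits<Hash>
  { };

// Full specialization for a hasher known to be trivially cheap.
struct identity_hash
{
  std::size_t operator()(std::size_t k) const { return k; }
};

template<>
  struct hashtable_hash_traits<identity_hash>
  {
    static constexpr std::size_t
    small_size_threshold()
    { return 0; }	// never take the linear path
  };
```

Placing the primary template at std namespace scope, as suggested, is what would make the partial specialization for logging_hash possible for users.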

Performance test results before the patch:

unordered_small_size.cc      std::unordered_set<int>: 1st insert       40r   32u    8s 264000112mem    0pf
unordered_small_size.cc      std::unordered_set<int>: find/erase       22r   22u    0s -191999808mem    0pf
unordered_small_size.cc      std::unordered_set<int>: 2nd insert       36r   36u    0s 191999776mem    0pf
unordered_small_size.cc      std::unordered_set<int>: erase key        25r   25u    0s -191999808mem    0pf
unordered_small_size.cc      std::unordered_set<string>: 1st insert    404r  244u  156s -1989936256mem    0pf
unordered_small_size.cc      std::unordered_set<string>: find/erase    315r  315u    0s 2061942272mem    0pf
unordered_small_size.cc      std::unordered_set<string>: 2nd insert    233r  233u    0s -2061942208mem    0pf
unordered_small_size.cc      std::unordered_set<string>: erase key     299r  298u    0s 2061942208mem    0pf

after the patch:

unordered_small_size.cc      std::unordered_set<int>: 1st insert       41r   33u    7s 264000112mem    0pf
unordered_small_size.cc      std::unordered_set<int>: find/erase       24r   25u    1s -191999808mem    0pf
unordered_small_size.cc      std::unordered_set<int>: 2nd insert       34r   34u    0s 191999776mem    0pf
unordered_small_size.cc      std::unordered_set<int>: erase key        25r   25u    0s -191999808mem    0pf
unordered_small_size.cc      std::unordered_set<string>: 1st insert    399r  232u  165s -1989936256mem    0pf
unordered_small_size.cc      std::unordered_set<string>: find/erase    196r  197u    0s 2061942272mem    0pf
unordered_small_size.cc      std::unordered_set<string>: 2nd insert    221r  222u    0s -2061942208mem    0pf
unordered_small_size.cc      std::unordered_set<string>: erase key     178r  178u    0s 2061942208mem    0pf

     libstdc++: Optimize operations on small size hashtable [PR 68303]

     When the hasher is identified as slow and the number of elements in
     the container is limited, use a brute-force loop on those elements to
     look for a given key using the key_equal functor. For the moment the
     default threshold below which the container is considered as small
     is 20.

     libstdc++-v3/ChangeLog:

             PR libstdc++/68303
             * include/bits/hashtable_policy.h
             (_Hashtable_hash_traits<_Hash>): New.
             (_Hash_code_base<>::_M_hash_code(const _Hash_node_value<>&)):
             New.
             (_Hashtable_base<>::_M_key_equals): New.
             (_Hashtable_base<>::_M_equals): Use the latter.
             (_Hashtable_base<>::_M_key_equals_tr): New.
             (_Hashtable_base<>::_M_equals_tr): Use the latter.
             * include/bits/hashtable.h
             (_Hashtable<>::__small_size_threshold()): New, use
             _Hashtable_hash_traits.
             (_Hashtable<>::find): Loop through elements to look for key if
             size is lower than __small_size_threshold().
             (_Hashtable<>::_M_emplace(true_type, _Args&&...)): Likewise.
             (_Hashtable<>::_M_insert_unique(_Kt&&, _Args&&, const
             _NodeGenerator&)): Likewise.
             (_Hashtable<>::_M_compute_hash_code(const_iterator, const
             key_type&)): New.
             (_Hashtable<>::_M_emplace(const_iterator, false_type,
             _Args&&...)): Use the latter.
             (_Hashtable<>::_M_find_before_node(const key_type&)): New.
             (_Hashtable<>::_M_erase(true_type, const key_type&)): Use the
             latter.
             (_Hashtable<>::_M_erase(false_type, const key_type&)): Likewise.
             * src/c++11/hashtable_c++0x.cc: Include <bits/functional_hash.h>.
             * testsuite/util/testsuite_performance.h
             (report_performance): Use 9 width to display memory.
             * testsuite/performance/23_containers/insert_erase/unordered_small_size.cc:
             New performance test case.

Tested under Linux x86_64.

Ok to commit?

François


[-- Attachment #2: pr68303.patch --]
[-- Type: text/x-patch, Size: 20515 bytes --]

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index 92516b81ae5..fedb5a7ec3d 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -426,6 +426,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _M_uses_single_bucket() const
       { return _M_uses_single_bucket(_M_buckets); }
 
+      static constexpr size_t
+      __small_size_threshold()
+      {
+	return
+	  __detail::_Hashtable_hash_traits<_Hash>::__small_size_threshold();
+      }
+
       __hashtable_alloc&
       _M_base_alloc() { return *this; }
 
@@ -795,6 +802,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _M_bucket_index(__hash_code __c) const
       { return __hash_code_base::_M_bucket_index(__c, _M_bucket_count); }
 
+      __node_base_ptr
+      _M_find_before_node(const key_type&);
+
       // Find and insert helper functions and types
       // Find the node before the one matching the criteria.
       __node_base_ptr
@@ -838,6 +848,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base_ptr
       _M_get_previous_node(size_type __bkt, __node_ptr __n);
 
+      pair<const_iterator, __hash_code>
+      _M_compute_hash_code(const_iterator __hint, const key_type& __k) const;
+
       // Insert node __n with hash code __code, in bucket __bkt if no
       // rehash (assumes no element with same key already present).
       // Takes ownership of __n if insertion succeeds, throws otherwise.
@@ -1124,7 +1137,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       void _M_rehash(size_type __bkt_count, const __rehash_state& __state);
     };
 
-
   // Definitions of class template _Hashtable's out-of-line member functions.
   template<typename _Key, typename _Value, typename _Alloc,
 	   typename _ExtractKey, typename _Equal,
@@ -1617,6 +1629,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     find(const key_type& __k)
     -> iterator
     {
+      if (size() <= __small_size_threshold())
+	{
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return __it;
+	  return end();
+	}
+
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__code);
       return iterator(_M_find_node(__bkt, __k, __code));
@@ -1632,6 +1652,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     find(const key_type& __k) const
     -> const_iterator
     {
+      if (size() <= __small_size_threshold())
+	{
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return __it;
+	  return end();
+	}
+
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__code);
       return const_iterator(_M_find_node(__bkt, __k, __code));
@@ -1844,6 +1872,35 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       }
 #endif
 
+  // Find the node before the one whose key compares equal to k.
+  // Return nullptr if no node is found.
+  template<typename _Key, typename _Value, typename _Alloc,
+	   typename _ExtractKey, typename _Equal,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   typename _RehashPolicy, typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
+    _M_find_before_node(const key_type& __k)
+    -> __node_base_ptr
+    {
+      __node_base_ptr __prev_p = &_M_before_begin;
+      if (!__prev_p->_M_nxt)
+	return nullptr;
+
+      for (__node_ptr __p = static_cast<__node_ptr>(__prev_p->_M_nxt);
+	   __p != nullptr;
+	   __p = __p->_M_next())
+	{
+	  if (this->_M_key_equals(__k, *__p))
+	    return __prev_p;
+
+	  __prev_p = __p;
+	}
+
+      return nullptr;
+    }
+
   // Find the node before the one whose key compares equal to k in the bucket
   // bkt. Return nullptr if no node is found.
   template<typename _Key, typename _Value, typename _Alloc,
@@ -1992,11 +2049,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	// First build the node to get access to the hash code
 	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
+	if (size() <= __small_size_threshold())
+	  {
+	    for (auto __it = begin(); __it != end(); ++__it)
+	      if (this->_M_key_equals(__k, *__it._M_cur))
+		// There is already an equivalent node, no insertion
+		return { __it, false };
+	  }
+
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__code);
+	if (size() > __small_size_threshold())
 	  if (__node_ptr __p = _M_find_node(__bkt, __k, __code))
 	    // There is already an equivalent node, no insertion
-	  return std::make_pair(iterator(__p), false);
+	    return { iterator(__p), false };
 
 	// Insert the node
 	auto __pos = _M_insert_unique_node(__bkt, __code, __node._M_node);
@@ -2020,13 +2086,41 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
 
-	__hash_code __code = this->_M_hash_code(__k);
+	auto __res = this->_M_compute_hash_code(__hint, __k);
 	auto __pos
-	  = _M_insert_multi_node(__hint._M_cur, __code, __node._M_node);
+	  = _M_insert_multi_node(__res.first._M_cur, __res.second,
+				 __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
 
+  template<typename _Key, typename _Value, typename _Alloc,
+	   typename _ExtractKey, typename _Equal,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   typename _RehashPolicy, typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
+    _M_compute_hash_code(const_iterator __hint, const key_type& __k) const
+    -> pair<const_iterator, __hash_code>
+    {
+      if (size() <= __small_size_threshold())
+	{
+	  if (__hint != cend())
+	    {
+	      for (auto __it = __hint; __it != cend(); ++__it)
+		if (this->_M_key_equals(__k, *__it._M_cur))
+		  return { __it, this->_M_hash_code(*__it._M_cur) };
+	    }
+
+	  for (auto __it = cbegin(); __it != __hint; ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return { __it, this->_M_hash_code(*__it._M_cur) };
+	}
+
+      return { __hint, this->_M_hash_code(__k) };
+    }
+
   template<typename _Key, typename _Value, typename _Alloc,
 	   typename _ExtractKey, typename _Equal,
 	   typename _Hash, typename _RangeHash, typename _Unused,
@@ -2125,9 +2219,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		       const _NodeGenerator& __node_gen)
       -> pair<iterator, bool>
       {
+	if (size() <= __small_size_threshold())
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals_tr(__k, *__it._M_cur))
+	      return { __it, false };
+
 	__hash_code __code = this->_M_hash_code_tr(__k);
 	size_type __bkt = _M_bucket_index(__code);
 
+	if (size() > __small_size_threshold())
 	  if (__node_ptr __node = _M_find_node_tr(__bkt, __k, __code))
 	    return { iterator(__node), false };
 
@@ -2161,11 +2261,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 
 	// Second compute the hash code so that we don't rehash if it throws.
-	__hash_code __code
-	  = this->_M_hash_code(_ExtractKey{}(__node._M_node->_M_v()));
+	auto __res = this->_M_compute_hash_code(
+	  __hint, _ExtractKey{}(__node._M_node->_M_v()));
 
 	auto __pos
-	  = _M_insert_multi_node(__hint._M_cur, __code, __node._M_node);
+	  = _M_insert_multi_node(__res.first._M_cur, __res.second,
+				 __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
@@ -2227,17 +2328,34 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     _M_erase(true_type /* __uks */, const key_type& __k)
     -> size_type
+    {
+      __node_base_ptr __prev_n;
+      __node_ptr __n;
+      std::size_t __bkt;
+      if (size() <= __small_size_threshold())
+	{
+	  __prev_n = _M_find_before_node(__k);
+	  if (!__prev_n)
+	    return 0;
+
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __bkt = _M_bucket_index(*__n);
+	}
+      else
 	{
 	  __hash_code __code = this->_M_hash_code(__k);
-      std::size_t __bkt = _M_bucket_index(__code);
+	  __bkt = _M_bucket_index(__code);
 
 	  // Look for the node before the first matching node.
-      __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+	  __prev_n = _M_find_before_node(__bkt, __k, __code);
 	  if (!__prev_n)
 	    return 0;
 
 	  // We found a matching node, erase it.
-      __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	}
+
       _M_erase(__bkt, __prev_n, __n);
       return 1;
     }
@@ -2251,22 +2369,39 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     _M_erase(false_type /* __uks */, const key_type& __k)
     -> size_type
+    {
+      std::size_t __bkt;
+      __node_base_ptr __prev_n;
+      __node_ptr __n;
+      if (size() <= __small_size_threshold())
+	{
+	  __prev_n = _M_find_before_node(__k);
+	  if (!__prev_n)
+	    return 0;
+
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __bkt = _M_bucket_index(*__n);
+	}
+      else
 	{
 	  __hash_code __code = this->_M_hash_code(__k);
-      std::size_t __bkt = _M_bucket_index(__code);
+	  __bkt = _M_bucket_index(__code);
 
 	  // Look for the node before the first matching node.
-      __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+	  __prev_n = _M_find_before_node(__bkt, __k, __code);
 	  if (!__prev_n)
 	    return 0;
 
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	}
+
       // _GLIBCXX_RESOLVE_LIB_DEFECTS
       // 526. Is it undefined if a function in the standard changes
       // in parameters?
       // We use one loop to find all matching nodes and another to deallocate
       // them so that the key stays valid during the first loop. It might be
       // invalidated indirectly when destroying nodes.
-      __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
       __node_ptr __n_last = __n->_M_next();
       while (__n_last && this->_M_node_equals(*__n, *__n_last))
 	__n_last = __n_last->_M_next();
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 2130c958262..93d1962c74f 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -59,19 +59,19 @@ namespace __detail
 
   // Helper function: return distance(first, last) for forward
   // iterators, or 0/1 for input iterators.
-  template<class _Iterator>
+  template<typename _Iterator>
     inline typename std::iterator_traits<_Iterator>::difference_type
     __distance_fw(_Iterator __first, _Iterator __last,
 		  std::input_iterator_tag)
     { return __first != __last ? 1 : 0; }
 
-  template<class _Iterator>
+  template<typename _Iterator>
     inline typename std::iterator_traits<_Iterator>::difference_type
     __distance_fw(_Iterator __first, _Iterator __last,
 		  std::forward_iterator_tag)
     { return std::distance(__first, __last); }
 
-  template<class _Iterator>
+  template<typename _Iterator>
     inline typename std::iterator_traits<_Iterator>::difference_type
     __distance_fw(_Iterator __first, _Iterator __last)
     { return __distance_fw(__first, __last,
@@ -242,6 +242,20 @@ namespace __detail
       using __unique_keys = __bool_constant<_Unique_keys>;
     };
 
+  /**
+   *  struct _Hashtable_hash_traits
+   *
+   *  Important traits for hash tables depending on associated hasher.
+   *
+   */
+  template<typename _Hash>
+    struct _Hashtable_hash_traits
+    {
+	static constexpr std::size_t
+	__small_size_threshold()
+	{ return std::__is_fast_hash<_Hash>::value ? 0 : 20; }
+    };
+
   /**
    *  struct _Hash_node_base
    *
@@ -1121,10 +1135,12 @@ namespace __detail
 			_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits,
 			true_type /* Has load factor */>
     {
+    private:
       using __hashtable = _Hashtable<_Key, _Value, _Alloc, _ExtractKey,
 				     _Equal, _Hash, _RangeHash, _Unused,
 				     _RehashPolicy, _Traits>;
 
+    public:
       float
       max_load_factor() const noexcept
       {
@@ -1266,6 +1282,14 @@ namespace __detail
 	  return _M_hash()(__k);
 	}
 
+      __hash_code
+      _M_hash_code(const _Hash_node_value<_Value, false>& __n) const
+      { return _M_hash_code(_ExtractKey{}(__n._M_v())); }
+
+      __hash_code
+      _M_hash_code(const _Hash_node_value<_Value, true>& __n) const
+      { return __n._M_hash_code; }
+
       std::size_t
       _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
       { return _RangeHash{}(__c, __bkt_count); }
@@ -1276,17 +1300,14 @@ namespace __detail
 	noexcept( noexcept(declval<const _Hash&>()(declval<const _Key&>()))
 		  && noexcept(declval<const _RangeHash&>()((__hash_code)0,
 							   (std::size_t)0)) )
-      {
-	return _RangeHash{}(_M_hash_code(_ExtractKey{}(__n._M_v())),
-			    __bkt_count);
-      }
+      { return _M_bucket_index(_M_hash_code(__n), __bkt_count); }
 
       std::size_t
       _M_bucket_index(const _Hash_node_value<_Value, true>& __n,
 		      std::size_t __bkt_count) const
 	noexcept( noexcept(declval<const _RangeHash&>()((__hash_code)0,
 							(std::size_t)0)) )
-      { return _RangeHash{}(__n._M_hash_code, __bkt_count); }
+      { return _M_bucket_index(__n._M_hash_code, __bkt_count); }
 
       void
       _M_store_code(_Hash_node_code_cache<false>&, __hash_code) const
@@ -1646,18 +1667,20 @@ namespace __detail
       { }
 
       bool
-      _M_equals(const _Key& __k, __hash_code __c,
-		const _Hash_node_value<_Value, __hash_cached::value>& __n) const
+      _M_key_equals(const _Key& __k,
+		    const _Hash_node_value<_Value,
+					   __hash_cached::value>& __n) const
 	{
-	static_assert(__is_invocable<const _Equal&, const _Key&, const _Key&>{},
+	  static_assert(
+	    __is_invocable<const _Equal&, const _Key&, const _Key&>{},
 	    "key equality predicate must be invocable with two arguments of "
 	    "key type");
-	return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
+	  return _M_eq()(__k, _ExtractKey{}(__n._M_v()));
 	}
 
       template<typename _Kt>
 	bool
-	_M_equals_tr(const _Kt& __k, __hash_code __c,
+	_M_key_equals_tr(const _Kt& __k,
 			 const _Hash_node_value<_Value,
 					     __hash_cached::value>& __n) const
 	{
@@ -1665,16 +1688,28 @@ namespace __detail
 	    __is_invocable<const _Equal&, const _Kt&, const _Key&>{},
 	    "key equality predicate must be invocable with two arguments of "
 	    "key type");
-	  return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
+	  return _M_eq()(__k, _ExtractKey{}(__n._M_v()));
 	}
 
+      bool
+      _M_equals(const _Key& __k, __hash_code __c,
+		const _Hash_node_value<_Value, __hash_cached::value>& __n) const
+      { return _S_equals(__c, __n) && _M_key_equals(__k, __n); }
+
+      template<typename _Kt>
+	bool
+	_M_equals_tr(const _Kt& __k, __hash_code __c,
+		     const _Hash_node_value<_Value,
+					    __hash_cached::value>& __n) const
+	{ return _S_equals(__c, __n) && _M_key_equals_tr(__k, __n); }
+
       bool
       _M_node_equals(
 	const _Hash_node_value<_Value, __hash_cached::value>& __lhn,
 	const _Hash_node_value<_Value, __hash_cached::value>& __rhn) const
       {
 	return _S_node_equals(__lhn, __rhn)
-	  && _M_eq()(_ExtractKey{}(__lhn._M_v()), _ExtractKey{}(__rhn._M_v()));
+	  && _M_key_equals(_ExtractKey{}(__lhn._M_v()), __rhn);
       }
 
       void
diff --git a/libstdc++-v3/src/c++11/hashtable_c++0x.cc b/libstdc++-v3/src/c++11/hashtable_c++0x.cc
index b9dec61b2ea..116192dfcf1 100644
--- a/libstdc++-v3/src/c++11/hashtable_c++0x.cc
+++ b/libstdc++-v3/src/c++11/hashtable_c++0x.cc
@@ -30,6 +30,7 @@
 #include <tuple>
 #include <ext/aligned_buffer.h>
 #include <ext/alloc_traits.h>
+#include <bits/functional_hash.h>
 #include <bits/hashtable_policy.h>
 
 namespace std _GLIBCXX_VISIBILITY(default)
diff --git a/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc b/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc
new file mode 100644
index 00000000000..ae63c15b5da
--- /dev/null
+++ b/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc
@@ -0,0 +1,125 @@
+// { dg-do run { target c++11 } }
+
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <string>
+#include <sstream>
+#include <vector>
+#include <unordered_set>
+#include <testsuite_performance.h>
+
+namespace
+{
+  const int nb_elements = 20;
+  const int nb_insts = 150000;
+
+  template<typename _ElemType>
+    void bench(const char* desc, const std::vector<_ElemType>& elems)
+    {
+      using namespace __gnu_test;
+
+      time_counter time;
+      resource_counter resource;
+
+      std::vector<std::unordered_set<_ElemType>> insts(nb_insts);
+      for (int j = 0; j != nb_insts; ++j)
+	insts.emplace_back();
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int i = 0; i != nb_elements; ++i)
+	  us.insert(elems[i]);
+
+      stop_counters(time, resource);
+
+      std::ostringstream ostr;
+      ostr << desc << " 1st insert";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int i = nb_elements - 1; i >= 0; --i)
+	  {
+	    auto it = us.find(elems[i]);
+	    if (it != us.end())
+	      us.erase(it);
+	  }
+
+      stop_counters(time, resource);
+
+      ostr.str("");
+      ostr << desc << " find/erase";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	{
+	  us.insert(elems[0]);
+	  for (int i = nb_elements - 1; i >= 0; --i)
+	    us.insert(elems[i]);
+	}
+
+      stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << " 2nd insert";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int j = nb_elements - 1; j >= 0; --j)
+	  us.erase(elems[j]);
+
+      stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << " erase key ";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+    }
+}
+
+int main()
+{
+  {
+    std::vector<int> elems;
+    elems.reserve(nb_elements);
+    for (int i = 0; i != nb_elements; ++i)
+      elems.push_back(i);
+
+    bench("std::unordered_set<int>:    ", elems);
+  }
+
+  {
+    std::vector<std::string> elems;
+    {
+      elems.reserve(nb_elements);
+      for (int i = 0; i != nb_elements; ++i)
+	{
+	  std::ostringstream ostr;
+	  ostr << "string #" << i << ' ' << std::string(1000, 'a' + i);
+	  elems.push_back(ostr.str());
+	}
+    }
+
+    bench("std::unordered_set<string>: ", elems);
+  }
+
+  return 0;
+}
diff --git a/libstdc++-v3/testsuite/util/testsuite_performance.h b/libstdc++-v3/testsuite/util/testsuite_performance.h
index cba3a0d4b17..4ca15ab0e71 100644
--- a/libstdc++-v3/testsuite/util/testsuite_performance.h
+++ b/libstdc++-v3/testsuite/util/testsuite_performance.h
@@ -239,7 +239,7 @@ namespace __gnu_test
     out << std::setw(4) << t.real_time() << "r" << space;
     out << std::setw(4) << t.user_time() << "u" << space;
     out << std::setw(4) << t.system_time() << "s" << space;
-    out << std::setw(8) << r.allocated_memory() << "mem" << space;
+    out << std::setw(9) << r.allocated_memory() << "mem" << space;
     out << std::setw(4) << r.hard_page_fault() << "pf" << space;
 
     out << std::endl;

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
  2021-08-16 19:03   ` François Dumont
@ 2021-09-23  4:36     ` François Dumont
  2021-12-21  6:07       ` François Dumont
  0 siblings, 1 reply; 10+ messages in thread
From: François Dumont @ 2021-09-23  4:36 UTC (permalink / raw)
  To: libstdc++; +Cc: gcc-patches

Ping?

On 16/08/21 9:03 pm, François Dumont wrote:
> On 17/07/20 2:58 pm, Jonathan Wakely wrote:
>> On 17/11/19 22:31 +0100, François Dumont wrote:
>>> This is an implementation of PR 68303.
>>>
>>> I try to use this idea as much as possible to avoid computation of 
>>> hash codes.
>>>
>>> Note that tests are not showing any gain. I guess hash computation 
>>> must be quite bad to get a benefit from it. So I am only activating 
>>> it when hash code is not cached and/or when computation is not fast.
>>
>> If the tests don't show any benefit, why bother making the change?
>
> I eventually managed to demonstrate this optimization through a 
> performance test case.
>
>>
>> Does it help the example in the PR?
>
> No, the code attached to the PR just shows what the user did to
> implement this optimization on his side.
>
> What I needed was a hash code computation that is slow compared to the
> equality operation. I realized that I had to use longer strings to
> achieve this.
>
> Moreover, making this optimization dependent on
> _Hashtable_traits::__hash_cached was just wrong, as we cannot use the
> cached hash code here: the input is a key instance, not a node.
>
> I introduced _Hashtable_hash_traits<_Hash> to offer a single
> customization point, as this optimization depends heavily on the cost
> difference between a hash code computation and a comparison. Maybe I
> should put it at std namespace scope to ease partial specialization?
>
> Performance test results before the patch:
>
> unordered_small_size.cc      std::unordered_set<int>: 1st insert       40r   32u    8s   264000112mem    0pf
> unordered_small_size.cc      std::unordered_set<int>: find/erase       22r   22u    0s  -191999808mem    0pf
> unordered_small_size.cc      std::unordered_set<int>: 2nd insert       36r   36u    0s   191999776mem    0pf
> unordered_small_size.cc      std::unordered_set<int>: erase key        25r   25u    0s  -191999808mem    0pf
> unordered_small_size.cc      std::unordered_set<string>: 1st insert   404r  244u  156s -1989936256mem    0pf
> unordered_small_size.cc      std::unordered_set<string>: find/erase   315r  315u    0s  2061942272mem    0pf
> unordered_small_size.cc      std::unordered_set<string>: 2nd insert   233r  233u    0s -2061942208mem    0pf
> unordered_small_size.cc      std::unordered_set<string>: erase key    299r  298u    0s  2061942208mem    0pf
>
> after the patch:
>
> unordered_small_size.cc      std::unordered_set<int>: 1st insert       41r   33u    7s   264000112mem    0pf
> unordered_small_size.cc      std::unordered_set<int>: find/erase       24r   25u    1s  -191999808mem    0pf
> unordered_small_size.cc      std::unordered_set<int>: 2nd insert       34r   34u    0s   191999776mem    0pf
> unordered_small_size.cc      std::unordered_set<int>: erase key        25r   25u    0s  -191999808mem    0pf
> unordered_small_size.cc      std::unordered_set<string>: 1st insert   399r  232u  165s -1989936256mem    0pf
> unordered_small_size.cc      std::unordered_set<string>: find/erase   196r  197u    0s  2061942272mem    0pf
> unordered_small_size.cc      std::unordered_set<string>: 2nd insert   221r  222u    0s -2061942208mem    0pf
> unordered_small_size.cc      std::unordered_set<string>: erase key    178r  178u    0s  2061942208mem    0pf
>
>     libstdc++: Optimize operations on small size hashtable [PR 68303]
>
>     When the hasher is identified as slow and the number of elements in
>     the container is limited, use a brute-force loop over those elements
>     to look for a given key using the key_equal functor. For the moment,
>     the default threshold below which the container is considered small
>     is 20.
>
>     libstdc++-v3/ChangeLog:
>
>             PR libstdc++/68303
>             * include/bits/hashtable_policy.h
>             (_Hashtable_hash_traits<_Hash>): New.
>             (_Hash_code_base<>::_M_hash_code(const 
> _Hash_node_value<>&)): New.
>             (_Hashtable_base<>::_M_key_equals): New.
>             (_Hashtable_base<>::_M_equals): Use latter.
>             (_Hashtable_base<>::_M_key_equals_tr): New.
>             (_Hashtable_base<>::_M_equals_tr): Use latter.
>             * include/bits/hashtable.h
>             (_Hashtable<>::__small_size_threshold()): New, use 
> _Hashtable_hash_traits.
>             (_Hashtable<>::find): Loop through elements to look for 
> key if size is lower
>             than __small_size_threshold().
>             (_Hashtable<>::_M_emplace(true_type, _Args&&...)): Likewise.
>             (_Hashtable<>::_M_insert_unique(_Kt&&, _Args&&, const 
> _NodeGenerator&)): Likewise.
> (_Hashtable<>::_M_compute_hash_code(const_iterator, const key_type&)): 
> New.
>             (_Hashtable<>::_M_emplace(const_iterator, false_type, 
> _Args&&...)): Use latter.
>             (_Hashtable<>::_M_find_before_node(const key_type&)): New.
>             (_Hashtable<>::_M_erase(true_type, const key_type&)): Use 
> latter.
>             (_Hashtable<>::_M_erase(false_type, const key_type&)): 
> Likewise.
>             * src/c++11/hashtable_c++0x.cc: Include 
> <bits/functional_hash.h>.
>             * testsuite/util/testsuite_performance.h
>             (report_performance): Use 9 width to display memory.
>             * 
> testsuite/performance/23_containers/insert_erase/unordered_small_size.cc:
>             New performance test case.
>
> Tested under Linux x86_64.
>
> Ok to commit ?
>
> François
>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
  2021-09-23  4:36     ` François Dumont
@ 2021-12-21  6:07       ` François Dumont
  2021-12-21  6:28         ` Daniel Krügler
       [not found]         ` <YcRzoSSc534Lg+/F@redhat.com>
  0 siblings, 2 replies; 10+ messages in thread
From: François Dumont @ 2021-12-21  6:07 UTC (permalink / raw)
  To: libstdc++; +Cc: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 6248 bytes --]

Hi

     Is there a chance for this patch to be integrated into the next GCC
release?

François


On 23/09/21 6:36 am, François Dumont wrote:
> Ping ?
>
> On 16/08/21 9:03 pm, François Dumont wrote:
>> On 17/07/20 2:58 pm, Jonathan Wakely wrote:
>>> On 17/11/19 22:31 +0100, François Dumont wrote:
>>>> This is an implementation of PR 68303.
>>>>
>>>> I try to use this idea as much as possible to avoid computation of 
>>>> hash codes.
>>>>
>>>> Note that tests are not showing any gain. I guess hash computation 
>>>> must be quite bad to get a benefit from it. So I am only activating 
>>>> it when hash code is not cached and/or when computation is not fast.
>>>
>>> If the tests don't show any benefit, why bother making the change?
>>
>> I eventually managed to demonstrate this optimization through a 
>> performance test case.
>>
>>>
>>> Does it help the example in the PR?
>>
>> No, the code attached to the PR just shows what the user did to
>> implement this optimization on his side.
>>
>> What I needed was a hash code computation that is slow compared to the
>> equality operation. I realized that I had to use longer strings to
>> achieve this.
>>
>> Moreover, making this optimization dependent on
>> _Hashtable_traits::__hash_cached was just wrong, as we cannot use the
>> cached hash code here: the input is a key instance, not a node.
>>
>> I introduced _Hashtable_hash_traits<_Hash> to offer a single
>> customization point, as this optimization depends heavily on the cost
>> difference between a hash code computation and a comparison. Maybe I
>> should put it at std namespace scope to ease partial specialization?
>>
>> Performance test results before the patch:
>>
>> unordered_small_size.cc      std::unordered_set<int>: 1st insert       40r   32u    8s   264000112mem    0pf
>> unordered_small_size.cc      std::unordered_set<int>: find/erase       22r   22u    0s  -191999808mem    0pf
>> unordered_small_size.cc      std::unordered_set<int>: 2nd insert       36r   36u    0s   191999776mem    0pf
>> unordered_small_size.cc      std::unordered_set<int>: erase key        25r   25u    0s  -191999808mem    0pf
>> unordered_small_size.cc      std::unordered_set<string>: 1st insert   404r  244u  156s -1989936256mem    0pf
>> unordered_small_size.cc      std::unordered_set<string>: find/erase   315r  315u    0s  2061942272mem    0pf
>> unordered_small_size.cc      std::unordered_set<string>: 2nd insert   233r  233u    0s -2061942208mem    0pf
>> unordered_small_size.cc      std::unordered_set<string>: erase key    299r  298u    0s  2061942208mem    0pf
>>
>> after the patch:
>>
>> unordered_small_size.cc      std::unordered_set<int>: 1st insert       41r   33u    7s   264000112mem    0pf
>> unordered_small_size.cc      std::unordered_set<int>: find/erase       24r   25u    1s  -191999808mem    0pf
>> unordered_small_size.cc      std::unordered_set<int>: 2nd insert       34r   34u    0s   191999776mem    0pf
>> unordered_small_size.cc      std::unordered_set<int>: erase key        25r   25u    0s  -191999808mem    0pf
>> unordered_small_size.cc      std::unordered_set<string>: 1st insert   399r  232u  165s -1989936256mem    0pf
>> unordered_small_size.cc      std::unordered_set<string>: find/erase   196r  197u    0s  2061942272mem    0pf
>> unordered_small_size.cc      std::unordered_set<string>: 2nd insert   221r  222u    0s -2061942208mem    0pf
>> unordered_small_size.cc      std::unordered_set<string>: erase key    178r  178u    0s  2061942208mem    0pf
>>
>>     libstdc++: Optimize operations on small size hashtable [PR 68303]
>>
>>     When the hasher is identified as slow and the number of elements in
>>     the container is limited, use a brute-force loop over those elements
>>     to look for a given key using the key_equal functor. For the moment,
>>     the default threshold below which the container is considered small
>>     is 20.
>>
>>     libstdc++-v3/ChangeLog:
>>
>>             PR libstdc++/68303
>>             * include/bits/hashtable_policy.h
>>             (_Hashtable_hash_traits<_Hash>): New.
>>             (_Hash_code_base<>::_M_hash_code(const 
>> _Hash_node_value<>&)): New.
>>             (_Hashtable_base<>::_M_key_equals): New.
>>             (_Hashtable_base<>::_M_equals): Use latter.
>>             (_Hashtable_base<>::_M_key_equals_tr): New.
>>             (_Hashtable_base<>::_M_equals_tr): Use latter.
>>             * include/bits/hashtable.h
>>             (_Hashtable<>::__small_size_threshold()): New, use 
>> _Hashtable_hash_traits.
>>             (_Hashtable<>::find): Loop through elements to look for 
>> key if size is lower
>>             than __small_size_threshold().
>>             (_Hashtable<>::_M_emplace(true_type, _Args&&...)): Likewise.
>>             (_Hashtable<>::_M_insert_unique(_Kt&&, _Args&&, const 
>> _NodeGenerator&)): Likewise.
>> (_Hashtable<>::_M_compute_hash_code(const_iterator, const 
>> key_type&)): New.
>>             (_Hashtable<>::_M_emplace(const_iterator, false_type, 
>> _Args&&...)): Use latter.
>>             (_Hashtable<>::_M_find_before_node(const key_type&)): New.
>>             (_Hashtable<>::_M_erase(true_type, const key_type&)): Use 
>> latter.
>>             (_Hashtable<>::_M_erase(false_type, const key_type&)): 
>> Likewise.
>>             * src/c++11/hashtable_c++0x.cc: Include 
>> <bits/functional_hash.h>.
>>             * testsuite/util/testsuite_performance.h
>>             (report_performance): Use 9 width to display memory.
>>             * 
>> testsuite/performance/23_containers/insert_erase/unordered_small_size.cc:
>>             New performance test case.
>>
>> Tested under Linux x86_64.
>>
>> Ok to commit?
>>
>> François
>>
>


[-- Attachment #2: pr68303.patch --]
[-- Type: text/x-patch, Size: 19625 bytes --]

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index 6e2d4c10cfe..6b2c6b99fef 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -419,6 +419,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _M_uses_single_bucket() const
       { return _M_uses_single_bucket(_M_buckets); }
 
+      static constexpr size_t
+      __small_size_threshold()
+      {
+	return
+	  __detail::_Hashtable_hash_traits<_Hash>::__small_size_threshold();
+      }
+
       __hashtable_alloc&
       _M_base_alloc() { return *this; }
 
@@ -788,6 +795,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _M_bucket_index(__hash_code __c) const
       { return __hash_code_base::_M_bucket_index(__c, _M_bucket_count); }
 
+      __node_base_ptr
+      _M_find_before_node(const key_type&);
+
       // Find and insert helper functions and types
       // Find the node before the one matching the criteria.
       __node_base_ptr
@@ -831,6 +841,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base_ptr
       _M_get_previous_node(size_type __bkt, __node_ptr __n);
 
+      pair<const_iterator, __hash_code>
+      _M_compute_hash_code(const_iterator __hint, const key_type& __k) const;
+
       // Insert node __n with hash code __code, in bucket __bkt if no
       // rehash (assumes no element with same key already present).
       // Takes ownership of __n if insertion succeeds, throws otherwise.
@@ -1126,7 +1139,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       void _M_rehash(size_type __bkt_count, const __rehash_state& __state);
     };
 
-
   // Definitions of class template _Hashtable's out-of-line member functions.
   template<typename _Key, typename _Value, typename _Alloc,
 	   typename _ExtractKey, typename _Equal,
@@ -1628,6 +1640,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     find(const key_type& __k)
     -> iterator
     {
+      if (size() <= __small_size_threshold())
+	{
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return __it;
+	  return end();
+	}
+
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__code);
       return iterator(_M_find_node(__bkt, __k, __code));
@@ -1643,6 +1663,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     find(const key_type& __k) const
     -> const_iterator
     {
+      if (size() <= __small_size_threshold())
+	{
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return __it;
+	  return end();
+	}
+
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__code);
       return const_iterator(_M_find_node(__bkt, __k, __code));
@@ -1855,6 +1883,35 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       }
 #endif
 
+  // Find the node before the one whose key compares equal to k.
+  // Return nullptr if no node is found.
+  template<typename _Key, typename _Value, typename _Alloc,
+	   typename _ExtractKey, typename _Equal,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   typename _RehashPolicy, typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
+    _M_find_before_node(const key_type& __k)
+    -> __node_base_ptr
+    {
+      __node_base_ptr __prev_p = &_M_before_begin;
+      if (!__prev_p->_M_nxt)
+	return nullptr;
+
+      for (__node_ptr __p = static_cast<__node_ptr>(__prev_p->_M_nxt);
+	   __p != nullptr;
+	   __p = __p->_M_next())
+	{
+	  if (this->_M_key_equals(__k, *__p))
+	    return __prev_p;
+
+	  __prev_p = __p;
+	}
+
+      return nullptr;
+    }
+
   // Find the node before the one whose key compares equal to k in the bucket
   // bkt. Return nullptr if no node is found.
   template<typename _Key, typename _Value, typename _Alloc,
@@ -2003,11 +2060,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	// First build the node to get access to the hash code
 	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
+	if (size() <= __small_size_threshold())
+	  {
+	    for (auto __it = begin(); __it != end(); ++__it)
+	      if (this->_M_key_equals(__k, *__it._M_cur))
+		// There is already an equivalent node, no insertion
+		return { __it, false };
+	  }
+
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__code);
+	if (size() > __small_size_threshold())
 	  if (__node_ptr __p = _M_find_node(__bkt, __k, __code))
 	    // There is already an equivalent node, no insertion
-	  return std::make_pair(iterator(__p), false);
+	    return { iterator(__p), false };
 
 	// Insert the node
 	auto __pos = _M_insert_unique_node(__bkt, __code, __node._M_node);
@@ -2031,13 +2097,41 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
 
-	__hash_code __code = this->_M_hash_code(__k);
+	auto __res = this->_M_compute_hash_code(__hint, __k);
 	auto __pos
-	  = _M_insert_multi_node(__hint._M_cur, __code, __node._M_node);
+	  = _M_insert_multi_node(__res.first._M_cur, __res.second,
+				 __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
 
+  template<typename _Key, typename _Value, typename _Alloc,
+	   typename _ExtractKey, typename _Equal,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   typename _RehashPolicy, typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
+    _M_compute_hash_code(const_iterator __hint, const key_type& __k) const
+    -> pair<const_iterator, __hash_code>
+    {
+      if (size() <= __small_size_threshold())
+	{
+	  if (__hint != cend())
+	    {
+	      for (auto __it = __hint; __it != cend(); ++__it)
+		if (this->_M_key_equals(__k, *__it._M_cur))
+		  return { __it, this->_M_hash_code(*__it._M_cur) };
+	    }
+
+	  for (auto __it = cbegin(); __it != __hint; ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return { __it, this->_M_hash_code(*__it._M_cur) };
+	}
+
+      return { __hint, this->_M_hash_code(__k) };
+    }
+
   template<typename _Key, typename _Value, typename _Alloc,
 	   typename _ExtractKey, typename _Equal,
 	   typename _Hash, typename _RangeHash, typename _Unused,
@@ -2136,9 +2230,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		       const _NodeGenerator& __node_gen)
       -> pair<iterator, bool>
       {
+	if (size() <= __small_size_threshold())
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals_tr(__k, *__it._M_cur))
+	      return { __it, false };
+
 	__hash_code __code = this->_M_hash_code_tr(__k);
 	size_type __bkt = _M_bucket_index(__code);
 
+	if (size() > __small_size_threshold())
 	  if (__node_ptr __node = _M_find_node_tr(__bkt, __k, __code))
 	    return { iterator(__node), false };
 
@@ -2172,11 +2272,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 
 	// Second compute the hash code so that we don't rehash if it throws.
-	__hash_code __code
-	  = this->_M_hash_code(_ExtractKey{}(__node._M_node->_M_v()));
+	auto __res = this->_M_compute_hash_code(
+	  __hint, _ExtractKey{}(__node._M_node->_M_v()));
 
 	auto __pos
-	  = _M_insert_multi_node(__hint._M_cur, __code, __node._M_node);
+	  = _M_insert_multi_node(__res.first._M_cur, __res.second,
+				 __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
@@ -2238,17 +2339,34 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     _M_erase(true_type /* __uks */, const key_type& __k)
     -> size_type
+    {
+      __node_base_ptr __prev_n;
+      __node_ptr __n;
+      std::size_t __bkt;
+      if (size() <= __small_size_threshold())
+	{
+	  __prev_n = _M_find_before_node(__k);
+	  if (!__prev_n)
+	    return 0;
+
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __bkt = _M_bucket_index(*__n);
+	}
+      else
 	{
 	  __hash_code __code = this->_M_hash_code(__k);
-      std::size_t __bkt = _M_bucket_index(__code);
+	  __bkt = _M_bucket_index(__code);
 
 	  // Look for the node before the first matching node.
-      __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+	  __prev_n = _M_find_before_node(__bkt, __k, __code);
 	  if (!__prev_n)
 	    return 0;
 
 	  // We found a matching node, erase it.
-      __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	}
+
       _M_erase(__bkt, __prev_n, __n);
       return 1;
     }
@@ -2262,22 +2380,39 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     _M_erase(false_type /* __uks */, const key_type& __k)
     -> size_type
+    {
+      std::size_t __bkt;
+      __node_base_ptr __prev_n;
+      __node_ptr __n;
+      if (size() <= __small_size_threshold())
+	{
+	  __prev_n = _M_find_before_node(__k);
+	  if (!__prev_n)
+	    return 0;
+
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __bkt = _M_bucket_index(*__n);
+	}
+      else
 	{
 	  __hash_code __code = this->_M_hash_code(__k);
-      std::size_t __bkt = _M_bucket_index(__code);
+	  __bkt = _M_bucket_index(__code);
 
 	  // Look for the node before the first matching node.
-      __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+	  __prev_n = _M_find_before_node(__bkt, __k, __code);
 	  if (!__prev_n)
 	    return 0;
 
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	}
+
       // _GLIBCXX_RESOLVE_LIB_DEFECTS
       // 526. Is it undefined if a function in the standard changes
       // in parameters?
       // We use one loop to find all matching nodes and another to deallocate
       // them so that the key stays valid during the first loop. It might be
       // invalidated indirectly when destroying nodes.
-      __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
       __node_ptr __n_last = __n->_M_next();
       while (__n_last && this->_M_node_equals(*__n, *__n_last))
 	__n_last = __n_last->_M_next();
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 0b5443fc187..b4a8af081ce 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -246,6 +246,20 @@ namespace __detail
       using __unique_keys = __bool_constant<_Unique_keys>;
     };
 
+  /**
+   *  struct _Hashtable_hash_traits
+   *
+   *  Important traits for hash tables depending on associated hasher.
+   *
+   */
+  template<typename _Hash>
+    struct _Hashtable_hash_traits
+    {
+      static constexpr std::size_t
+      __small_size_threshold()
+      { return std::__is_fast_hash<_Hash>::value ? 0 : 20; }
+    };
+
   /**
    *  struct _Hash_node_base
    *
@@ -1105,10 +1119,12 @@ namespace __detail
 			_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits,
 			true_type /* Has load factor */>
     {
+    private:
       using __hashtable = _Hashtable<_Key, _Value, _Alloc, _ExtractKey,
 				     _Equal, _Hash, _RangeHash, _Unused,
 				     _RehashPolicy, _Traits>;
 
+    public:
       float
       max_load_factor() const noexcept
       {
@@ -1263,6 +1279,14 @@ namespace __detail
 		const _Hash_node_value<_Value, __cache_hash_code>& __n) const
 	{ return _M_hash_code(_ExtractKey{}(__n._M_v())); }
 
+      __hash_code
+      _M_hash_code(const _Hash_node_value<_Value, false>& __n) const
+      { return _M_hash_code(_ExtractKey{}(__n._M_v())); }
+
+      __hash_code
+      _M_hash_code(const _Hash_node_value<_Value, true>& __n) const
+      { return __n._M_hash_code; }
+
       std::size_t
       _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
       { return _RangeHash{}(__c, __bkt_count); }
@@ -1273,17 +1297,14 @@ namespace __detail
 	noexcept( noexcept(declval<const _Hash&>()(declval<const _Key&>()))
 		  && noexcept(declval<const _RangeHash&>()((__hash_code)0,
 							   (std::size_t)0)) )
-      {
-	return _RangeHash{}(_M_hash_code(_ExtractKey{}(__n._M_v())),
-			    __bkt_count);
-      }
+      { return _M_bucket_index(_M_hash_code(__n), __bkt_count); }
 
       std::size_t
       _M_bucket_index(const _Hash_node_value<_Value, true>& __n,
 		      std::size_t __bkt_count) const
 	noexcept( noexcept(declval<const _RangeHash&>()((__hash_code)0,
 							(std::size_t)0)) )
-      { return _RangeHash{}(__n._M_hash_code, __bkt_count); }
+      { return _M_bucket_index(__n._M_hash_code, __bkt_count); }
 
       void
       _M_store_code(_Hash_node_code_cache<false>&, __hash_code) const
@@ -1641,18 +1662,19 @@ namespace __detail
       { }
 
       bool
-      _M_equals(const _Key& __k, __hash_code __c,
-		const _Hash_node_value<_Value, __hash_cached::value>& __n) const
+      _M_key_equals(const _Key& __k,
+		    const _Hash_node_value<_Value,
+					   __hash_cached::value>& __n) const
       {
 	static_assert(__is_invocable<const _Equal&, const _Key&, const _Key&>{},
 	  "key equality predicate must be invocable with two arguments of "
 	  "key type");
-	return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
+	return _M_eq()(__k, _ExtractKey{}(__n._M_v()));
       }
 
       template<typename _Kt>
 	bool
-	_M_equals_tr(const _Kt& __k, __hash_code __c,
+	_M_key_equals_tr(const _Kt& __k,
 			 const _Hash_node_value<_Value,
 					     __hash_cached::value>& __n) const
 	{
@@ -1660,16 +1682,28 @@ namespace __detail
 	    __is_invocable<const _Equal&, const _Kt&, const _Key&>{},
 	    "key equality predicate must be invocable with two arguments of "
 	    "key type");
-	  return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
+	  return _M_eq()(__k, _ExtractKey{}(__n._M_v()));
 	}
 
+      bool
+      _M_equals(const _Key& __k, __hash_code __c,
+		const _Hash_node_value<_Value, __hash_cached::value>& __n) const
+      { return _S_equals(__c, __n) && _M_key_equals(__k, __n); }
+
+      template<typename _Kt>
+	bool
+	_M_equals_tr(const _Kt& __k, __hash_code __c,
+		     const _Hash_node_value<_Value,
+					    __hash_cached::value>& __n) const
+	{ return _S_equals(__c, __n) && _M_key_equals_tr(__k, __n); }
+
       bool
       _M_node_equals(
 	const _Hash_node_value<_Value, __hash_cached::value>& __lhn,
 	const _Hash_node_value<_Value, __hash_cached::value>& __rhn) const
       {
 	return _S_node_equals(__lhn, __rhn)
-	  && _M_eq()(_ExtractKey{}(__lhn._M_v()), _ExtractKey{}(__rhn._M_v()));
+	  && _M_key_equals(_ExtractKey{}(__lhn._M_v()), __rhn);
       }
 
       void
diff --git a/libstdc++-v3/src/c++11/hashtable_c++0x.cc b/libstdc++-v3/src/c++11/hashtable_c++0x.cc
index b9dec61b2ea..116192dfcf1 100644
--- a/libstdc++-v3/src/c++11/hashtable_c++0x.cc
+++ b/libstdc++-v3/src/c++11/hashtable_c++0x.cc
@@ -30,6 +30,7 @@
 #include <tuple>
 #include <ext/aligned_buffer.h>
 #include <ext/alloc_traits.h>
+#include <bits/functional_hash.h>
 #include <bits/hashtable_policy.h>
 
 namespace std _GLIBCXX_VISIBILITY(default)
diff --git a/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc b/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc
new file mode 100644
index 00000000000..ae63c15b5da
--- /dev/null
+++ b/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc
@@ -0,0 +1,125 @@
+// { dg-do run { target c++11 } }
+
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <string>
+#include <sstream>
+#include <vector>
+#include <unordered_set>
+#include <testsuite_performance.h>
+
+namespace
+{
+  const int nb_elements = 20;
+  const int nb_insts = 150000;
+
+  template<typename _ElemType>
+    void bench(const char* desc, const std::vector<_ElemType>& elems)
+    {
+      using namespace __gnu_test;
+
+      time_counter time;
+      resource_counter resource;
+
+      std::vector<std::unordered_set<_ElemType>> insts(nb_insts);
+      for (int j = 0; j != nb_insts; ++j)
+	insts.emplace_back();
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int i = 0; i != nb_elements; ++i)
+	  us.insert(elems[i]);
+
+      stop_counters(time, resource);
+
+      std::ostringstream ostr;
+      ostr << desc << " 1st insert";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int i = nb_elements - 1; i >= 0; --i)
+	  {
+	    auto it = us.find(elems[i]);
+	    if (it != us.end())
+	      us.erase(it);
+	  }
+
+      stop_counters(time, resource);
+
+      ostr.str("");
+      ostr << desc << " find/erase";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	{
+	  us.insert(elems[0]);
+	  for (int i = nb_elements - 1; i >= 0; --i)
+	    us.insert(elems[i]);
+	}
+
+      stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << " 2nd insert";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int j = nb_elements - 1; j >= 0; --j)
+	  us.erase(elems[j]);
+
+      stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << " erase key ";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+    }
+}
+
+int main()
+{
+  {
+    std::vector<int> elems;
+    elems.reserve(nb_elements);
+    for (int i = 0; i != nb_elements; ++i)
+      elems.push_back(i);
+
+    bench("std::unordered_set<int>:    ", elems);
+  }
+
+  {
+    std::vector<std::string> elems;
+    {
+      elems.reserve(nb_elements);
+      for (int i = 0; i != nb_elements; ++i)
+	{
+	  std::ostringstream ostr;
+	  ostr << "string #" << i << ' ' << std::string(1000, 'a' + i);
+	  elems.push_back(ostr.str());
+	}
+    }
+
+    bench("std::unordered_set<string>: ", elems);
+  }
+
+  return 0;
+}
diff --git a/libstdc++-v3/testsuite/util/testsuite_performance.h b/libstdc++-v3/testsuite/util/testsuite_performance.h
index cba3a0d4b17..4ca15ab0e71 100644
--- a/libstdc++-v3/testsuite/util/testsuite_performance.h
+++ b/libstdc++-v3/testsuite/util/testsuite_performance.h
@@ -239,7 +239,7 @@ namespace __gnu_test
     out << std::setw(4) << t.real_time() << "r" << space;
     out << std::setw(4) << t.user_time() << "u" << space;
     out << std::setw(4) << t.system_time() << "s" << space;
-    out << std::setw(8) << r.allocated_memory() << "mem" << space;
+    out << std::setw(9) << r.allocated_memory() << "mem" << space;
     out << std::setw(4) << r.hard_page_fault() << "pf" << space;
 
     out << std::endl;

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
  2021-12-21  6:07       ` François Dumont
@ 2021-12-21  6:28         ` Daniel Krügler
  2021-12-21 17:55           ` François Dumont
       [not found]         ` <YcRzoSSc534Lg+/F@redhat.com>
  1 sibling, 1 reply; 10+ messages in thread
From: Daniel Krügler @ 2021-12-21  6:28 UTC (permalink / raw)
  To: François Dumont; +Cc: libstdc++, gcc-patches

On Tue, 21 Dec 2021 at 07:08, François Dumont via
Libstdc++ <libstdc++@gcc.gnu.org> wrote:
>
> Hi
>
>      Is there a chance for this patch to be integrated for next gcc
> release ?
>
> François
>

No objection to the acceptance, but: shouldn't
__small_size_threshold() be a noexcept function?

- Daniel

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
  2021-12-21  6:28         ` Daniel Krügler
@ 2021-12-21 17:55           ` François Dumont
  2021-12-23 12:43             ` Jonathan Wakely
  0 siblings, 1 reply; 10+ messages in thread
From: François Dumont @ 2021-12-21 17:55 UTC (permalink / raw)
  To: Daniel Krügler; +Cc: libstdc++, gcc-patches

On 21/12/21 7:28 am, Daniel Krügler wrote:
> Am Di., 21. Dez. 2021 um 07:08 Uhr schrieb François Dumont via
> Libstdc++ <libstdc++@gcc.gnu.org>:
>> Hi
>>
>>       Is there a chance for this patch to be integrated for next gcc
>> release ?
>>
>> François
>>
> No counterargument for the acceptance, but: Shouldn't
> __small_size_threshold() be a noexcept function?
>
> - Daniel

Could it enhance code generation? I could make it depend on the
_Hashtable_hash_traits<>::__small_size_threshold() noexcept
qualification if so. But I was hoping that the compiler would detect
all that itself.

Otherwise no, it does not have to be noexcept: it is used to avoid a
hasher invocation in some situations, and the hasher is not
constrained to be noexcept. At least I do not need to static_assert
this.



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
  2021-12-21 17:55           ` François Dumont
@ 2021-12-23 12:43             ` Jonathan Wakely
  0 siblings, 0 replies; 10+ messages in thread
From: Jonathan Wakely @ 2021-12-23 12:43 UTC (permalink / raw)
  To: François Dumont; +Cc: Daniel Krügler, libstdc++, gcc-patches

On Tue, 21 Dec 2021 at 17:56, François Dumont via Libstdc++ <
libstdc++@gcc.gnu.org> wrote:

> On 21/12/21 7:28 am, Daniel Krügler wrote:
> > Am Di., 21. Dez. 2021 um 07:08 Uhr schrieb François Dumont via
> > Libstdc++ <libstdc++@gcc.gnu.org>:
> >> Hi
> >>
> >>       Is there a chance for this patch to be integrated for next gcc
> >> release ?
> >>
> >> François
> >>
> > No counterargument for the acceptance, but: Shouldn't
> > __small_size_threshold() be a noexcept function?
> >
> > - Daniel
>
> Could it enhance code generation? I could make it depend on the
> _Hashtable_hash_traits<>::__small_size_threshold() noexcept
> qualification if so. But I was hoping that the compiler would detect
> all that itself.
>
> Otherwise no, it does not have to be noexcept: it is used to avoid a
> hasher invocation in some situations, and the hasher is not
> constrained to be noexcept. At least I do not need to static_assert
> this.
>
>
But why not make it noexcept? It just returns a constant integer. It can be
noexcept.
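The point is easy to demonstrate with a minimal standalone snippet (`demo_traits` is an invented name, not the library's):

```cpp
#include <cstddef>

// A static constexpr member that just returns a constant can always be
// marked noexcept; the noexcept operator then reports true for a call.
struct demo_traits
{
  static constexpr std::size_t
  small_size_threshold() noexcept
  { return 20; }
};

static_assert(noexcept(demo_traits::small_size_threshold()),
              "a call to a constant-returning function can never throw");
static_assert(demo_traits::small_size_threshold() == 20, "value check");
```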

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
       [not found]         ` <YcRzoSSc534Lg+/F@redhat.com>
@ 2021-12-25 21:39           ` François Dumont
  2022-01-05 17:07             ` Jonathan Wakely
  0 siblings, 1 reply; 10+ messages in thread
From: François Dumont @ 2021-12-25 21:39 UTC (permalink / raw)
  To: Jonathan Wakely; +Cc: libstdc++, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 8407 bytes --]

On 23/12/21 2:03 pm, Jonathan Wakely wrote:
> On 21/12/21 07:07 +0100, François Dumont wrote:
>> Hi
>>
>> Is there a chance for this patch to be integrated for next gcc
>> release?
>
> Yes, I think this can still make it for GCC 12 (the patch was first
> posted long ago in stage 1 and it's just me being slow to review it).
>
> But I have some questions and comments ...
>
>
>> diff --git a/libstdc++-v3/include/bits/hashtable.h 
>> b/libstdc++-v3/include/bits/hashtable.h
>> index 6e2d4c10cfe..6b2c6b99fef 100644
>> --- a/libstdc++-v3/include/bits/hashtable.h
>> +++ b/libstdc++-v3/include/bits/hashtable.h
>> @@ -419,6 +419,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       _M_uses_single_bucket() const
>>       { return _M_uses_single_bucket(_M_buckets); }
>>
>> +      static constexpr size_t
>> +      __small_size_threshold()
>> +      {
>> +    return
>> + __detail::_Hashtable_hash_traits<_Hash>::__small_size_threshold();
>> +      }
>> +
I've added the noexcept qualification as you asked.
>>       __hashtable_alloc&
>>       _M_base_alloc() { return *this; }
>>
>> @@ -788,6 +795,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       _M_bucket_index(__hash_code __c) const
>>       { return __hash_code_base::_M_bucket_index(__c, 
>> _M_bucket_count); }
>>
>> +      __node_base_ptr
>> +      _M_find_before_node(const key_type&);
>> +
>>       // Find and insert helper functions and types
>>       // Find the node before the one matching the criteria.
>>       __node_base_ptr
>> @@ -831,6 +841,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       __node_base_ptr
>>       _M_get_previous_node(size_type __bkt, __node_ptr __n);
>>
>> +      pair<const_iterator, __hash_code>
>> +      _M_compute_hash_code(const_iterator __hint, const key_type& 
>> __k) const;
>> +
>>       // Insert node __n with hash code __code, in bucket __bkt if no
>>       // rehash (assumes no element with same key already present).
>>       // Takes ownership of __n if insertion succeeds, throws otherwise.
>> @@ -1126,7 +1139,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       void _M_rehash(size_type __bkt_count, const __rehash_state& 
>> __state);
>>     };
>>
>> -
>>   // Definitions of class template _Hashtable's out-of-line member 
>> functions.
>>   template<typename _Key, typename _Value, typename _Alloc,
>>        typename _ExtractKey, typename _Equal,
>> @@ -1628,6 +1640,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>     find(const key_type& __k)
>>     -> iterator
>>     {
>> +      if (size() <= __small_size_threshold())
>> +    {
>> +      for (auto __it = begin(); __it != end(); ++__it)
>> +        if (this->_M_key_equals(__k, *__it._M_cur))
>> +          return __it;
>> +      return end();
>> +    }
>
> This loop is repeated a few times, would it be better factored out
> into its own function? _M_find_loop or something? The return type is
> different in some cases, so maybe it's OK like this.
Yes, I also thought about that, but there are often small changes from
one loop to another, as you noticed.
>
>
>
>> +
>>       __hash_code __code = this->_M_hash_code(__k);
>>       std::size_t __bkt = _M_bucket_index(__code);
>>       return iterator(_M_find_node(__bkt, __k, __code));
>> @@ -1643,6 +1663,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>     find(const key_type& __k) const
>>     -> const_iterator
>>     {
>> +      if (size() <= __small_size_threshold())
>> +    {
>> +      for (auto __it = begin(); __it != end(); ++__it)
>> +        if (this->_M_key_equals(__k, *__it._M_cur))
>> +          return __it;
>> +      return end();
>> +    }
>> +
>>       __hash_code __code = this->_M_hash_code(__k);
>>       std::size_t __bkt = _M_bucket_index(__code);
>>       return const_iterator(_M_find_node(__bkt, __k, __code));
>> @@ -1855,6 +1883,35 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>>       }
>> #endif
>>
>> +  // Find the node before the one whose key compares equal to k.
>> +  // Return nullptr if no node is found.
>> +  template<typename _Key, typename _Value, typename _Alloc,
>> +       typename _ExtractKey, typename _Equal,
>> +       typename _Hash, typename _RangeHash, typename _Unused,
>> +       typename _RehashPolicy, typename _Traits>
>> +    auto
>> +    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
>> +           _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
>> +    _M_find_before_node(const key_type& __k)
>> +    -> __node_base_ptr
>> +    {
>> +      __node_base_ptr __prev_p = &_M_before_begin;
>
> This is OK now, but would need to be std::__addressof(_M_before_begin)
> if/when the _Hash_code_base type becomes dependent on the allocator's
> pointer.
Yes, maybe after the GCC release we will talk about those fancy
pointer types again.
>
>      __n_last = __n_last->_M_next();
>> diff --git a/libstdc++-v3/include/bits/hashtable_policy.h 
>> b/libstdc++-v3/include/bits/hashtable_policy.h
>> index 0b5443fc187..b4a8af081ce 100644
>> --- a/libstdc++-v3/include/bits/hashtable_policy.h
>> +++ b/libstdc++-v3/include/bits/hashtable_policy.h
>> @@ -246,6 +246,20 @@ namespace __detail
>>       using __unique_keys = __bool_constant<_Unique_keys>;
>>     };
>>
>> +  /**
>> +   *  struct _Hashtable_hash_traits
>> +   *
>> +   *  Important traits for hash tables depending on associated hasher.
>> +   *
>> +   */
>> +  template<typename _Hash>
>> +    struct _Hashtable_hash_traits
>> +    {
>> +      static constexpr std::size_t
>> +      __small_size_threshold()
>> +      { return std::__is_fast_hash<_Hash>::value ? 0 : 20; }
>> +    };
>
> Yet another trait that nobody is ever going to specialize make me sad.
> I don't have a better idea though.

Sure, but maybe I can document it?

I also wonder why you did not propose to make it a constant rather than 
requesting to add the noexcept.
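One way to document it would be a usage sketch like the following, which mirrors the pattern outside the library (all names here are illustrative, not libstdc++'s; the real trait lives in namespace __detail and returns 0 or 20 depending on std::__is_fast_hash):

```cpp
#include <cstddef>

// Mirror of the _Hashtable_hash_traits pattern: a trait with a default
// threshold that can be specialized per hasher type.
template<typename Hash>
struct hash_traits
{
  static constexpr std::size_t
  small_size_threshold() noexcept
  { return 20; }   // assume a slow hasher by default in this sketch
};

// A hasher known to be cheap disables the small-size path entirely.
struct cheap_int_hash
{
  std::size_t
  operator()(int x) const noexcept
  { return static_cast<std::size_t>(x); }
};

template<>
struct hash_traits<cheap_int_hash>
{
  static constexpr std::size_t
  small_size_threshold() noexcept
  { return 0; }    // 0 means: always use the hashed bucket lookup
};
```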

>
>        {
>> @@ -1263,6 +1279,14 @@ namespace __detail
>>         const _Hash_node_value<_Value, __cache_hash_code>& __n) const
>>     { return _M_hash_code(_ExtractKey{}(__n._M_v())); }
>>
>> +      __hash_code
>> +      _M_hash_code(const _Hash_node_value<_Value, false>& __n) const
>> +      { return _M_hash_code(_ExtractKey{}(__n._M_v())); }
>> +
>> +      __hash_code
>> +      _M_hash_code(const _Hash_node_value<_Value, true>& __n) const
>> +      { return __n._M_hash_code; }
>> +
>>       std::size_t
>>       _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
>>       { return _RangeHash{}(__c, __bkt_count); }
>> @@ -1273,17 +1297,14 @@ namespace __detail
>>     noexcept( noexcept(declval<const _Hash&>()(declval<const _Key&>()))
>>           && noexcept(declval<const _RangeHash&>()((__hash_code)0,
>>                                (std::size_t)0)) )
>> -      {
>> -    return _RangeHash{}(_M_hash_code(_ExtractKey{}(__n._M_v())),
>> -                __bkt_count);
>> -      }
>> +      { return _M_bucket_index(_M_hash_code(__n), __bkt_count); }
>
> Why add this extra level of indirection (and overload resolution)?
>
> We know this is a _Hash_node_value<V, false>, why call _M_hash_code to
> decide how to get the hash code for it?

It was just to avoid code duplication, but indeed it introduces extra
overload resolution, so I reverted it.

>
>
>> diff --git a/libstdc++-v3/testsuite/util/testsuite_performance.h 
>> b/libstdc++-v3/testsuite/util/testsuite_performance.h
>> index cba3a0d4b17..4ca15ab0e71 100644
>> --- a/libstdc++-v3/testsuite/util/testsuite_performance.h
>> +++ b/libstdc++-v3/testsuite/util/testsuite_performance.h
>> @@ -239,7 +239,7 @@ namespace __gnu_test
>>     out << std::setw(4) << t.real_time() << "r" << space;
>>     out << std::setw(4) << t.user_time() << "u" << space;
>>     out << std::setw(4) << t.system_time() << "s" << space;
>> -    out << std::setw(8) << r.allocated_memory() << "mem" << space;
>> +    out << std::setw(9) << r.allocated_memory() << "mem" << space;
>
> One day I need to figure out why the reported memory is garbage so
> often


Ok with those changes?

François



[-- Attachment #2: pr68303.patch --]
[-- Type: text/x-patch, Size: 18834 bytes --]

diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index 6e2d4c10cfe..c47e3b58d3f 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -419,6 +419,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _M_uses_single_bucket() const
       { return _M_uses_single_bucket(_M_buckets); }
 
+      static constexpr size_t
+      __small_size_threshold() noexcept
+      {
+	return
+	  __detail::_Hashtable_hash_traits<_Hash>::__small_size_threshold();
+      }
+
       __hashtable_alloc&
       _M_base_alloc() { return *this; }
 
@@ -788,6 +795,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       _M_bucket_index(__hash_code __c) const
       { return __hash_code_base::_M_bucket_index(__c, _M_bucket_count); }
 
+      __node_base_ptr
+      _M_find_before_node(const key_type&);
+
       // Find and insert helper functions and types
       // Find the node before the one matching the criteria.
       __node_base_ptr
@@ -831,6 +841,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       __node_base_ptr
       _M_get_previous_node(size_type __bkt, __node_ptr __n);
 
+      pair<const_iterator, __hash_code>
+      _M_compute_hash_code(const_iterator __hint, const key_type& __k) const;
+
       // Insert node __n with hash code __code, in bucket __bkt if no
       // rehash (assumes no element with same key already present).
       // Takes ownership of __n if insertion succeeds, throws otherwise.
@@ -1126,7 +1139,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       void _M_rehash(size_type __bkt_count, const __rehash_state& __state);
     };
 
-
   // Definitions of class template _Hashtable's out-of-line member functions.
   template<typename _Key, typename _Value, typename _Alloc,
 	   typename _ExtractKey, typename _Equal,
@@ -1628,6 +1640,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     find(const key_type& __k)
     -> iterator
     {
+      if (size() <= __small_size_threshold())
+	{
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return __it;
+	  return end();
+	}
+
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__code);
       return iterator(_M_find_node(__bkt, __k, __code));
@@ -1643,6 +1663,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     find(const key_type& __k) const
     -> const_iterator
     {
+      if (size() <= __small_size_threshold())
+	{
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return __it;
+	  return end();
+	}
+
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__code);
       return const_iterator(_M_find_node(__bkt, __k, __code));
@@ -1855,6 +1883,35 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       }
 #endif
 
+  // Find the node before the one whose key compares equal to k.
+  // Return nullptr if no node is found.
+  template<typename _Key, typename _Value, typename _Alloc,
+	   typename _ExtractKey, typename _Equal,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   typename _RehashPolicy, typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
+    _M_find_before_node(const key_type& __k)
+    -> __node_base_ptr
+    {
+      __node_base_ptr __prev_p = &_M_before_begin;
+      if (!__prev_p->_M_nxt)
+	return nullptr;
+
+      for (__node_ptr __p = static_cast<__node_ptr>(__prev_p->_M_nxt);
+	   __p != nullptr;
+	   __p = __p->_M_next())
+	{
+	  if (this->_M_key_equals(__k, *__p))
+	    return __prev_p;
+
+	  __prev_p = __p;
+	}
+
+      return nullptr;
+    }
+
   // Find the node before the one whose key compares equal to k in the bucket
   // bkt. Return nullptr if no node is found.
   template<typename _Key, typename _Value, typename _Alloc,
@@ -2003,11 +2060,20 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	// First build the node to get access to the hash code
 	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
+	if (size() <= __small_size_threshold())
+	  {
+	    for (auto __it = begin(); __it != end(); ++__it)
+	      if (this->_M_key_equals(__k, *__it._M_cur))
+		// There is already an equivalent node, no insertion
+		return { __it, false };
+	  }
+
 	__hash_code __code = this->_M_hash_code(__k);
 	size_type __bkt = _M_bucket_index(__code);
+	if (size() > __small_size_threshold())
 	  if (__node_ptr __p = _M_find_node(__bkt, __k, __code))
 	    // There is already an equivalent node, no insertion
-	  return std::make_pair(iterator(__p), false);
+	    return { iterator(__p), false };
 
 	// Insert the node
 	auto __pos = _M_insert_unique_node(__bkt, __code, __node._M_node);
@@ -2031,13 +2097,41 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_Scoped_node __node { this, std::forward<_Args>(__args)...  };
 	const key_type& __k = _ExtractKey{}(__node._M_node->_M_v());
 
-	__hash_code __code = this->_M_hash_code(__k);
+	auto __res = this->_M_compute_hash_code(__hint, __k);
 	auto __pos
-	  = _M_insert_multi_node(__hint._M_cur, __code, __node._M_node);
+	  = _M_insert_multi_node(__res.first._M_cur, __res.second,
+				 __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
 
+  template<typename _Key, typename _Value, typename _Alloc,
+	   typename _ExtractKey, typename _Equal,
+	   typename _Hash, typename _RangeHash, typename _Unused,
+	   typename _RehashPolicy, typename _Traits>
+    auto
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
+    _M_compute_hash_code(const_iterator __hint, const key_type& __k) const
+    -> pair<const_iterator, __hash_code>
+    {
+      if (size() <= __small_size_threshold())
+	{
+	  if (__hint != cend())
+	    {
+	      for (auto __it = __hint; __it != cend(); ++__it)
+		if (this->_M_key_equals(__k, *__it._M_cur))
+		  return { __it, this->_M_hash_code(*__it._M_cur) };
+	    }
+
+	  for (auto __it = cbegin(); __it != __hint; ++__it)
+	    if (this->_M_key_equals(__k, *__it._M_cur))
+	      return { __it, this->_M_hash_code(*__it._M_cur) };
+	}
+
+      return { __hint, this->_M_hash_code(__k) };
+    }
+
   template<typename _Key, typename _Value, typename _Alloc,
 	   typename _ExtractKey, typename _Equal,
 	   typename _Hash, typename _RangeHash, typename _Unused,
@@ -2136,9 +2230,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 		       const _NodeGenerator& __node_gen)
       -> pair<iterator, bool>
       {
+	if (size() <= __small_size_threshold())
+	  for (auto __it = begin(); __it != end(); ++__it)
+	    if (this->_M_key_equals_tr(__k, *__it._M_cur))
+	      return { __it, false };
+
 	__hash_code __code = this->_M_hash_code_tr(__k);
 	size_type __bkt = _M_bucket_index(__code);
 
+	if (size() > __small_size_threshold())
 	  if (__node_ptr __node = _M_find_node_tr(__bkt, __k, __code))
 	    return { iterator(__node), false };
 
@@ -2172,11 +2272,12 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	_Scoped_node __node{ __node_gen(std::forward<_Arg>(__v)), this };
 
 	// Second compute the hash code so that we don't rehash if it throws.
-	__hash_code __code
-	  = this->_M_hash_code(_ExtractKey{}(__node._M_node->_M_v()));
+	auto __res = this->_M_compute_hash_code(
+	  __hint, _ExtractKey{}(__node._M_node->_M_v()));
 
 	auto __pos
-	  = _M_insert_multi_node(__hint._M_cur, __code, __node._M_node);
+	  = _M_insert_multi_node(__res.first._M_cur, __res.second,
+				 __node._M_node);
 	__node._M_node = nullptr;
 	return __pos;
       }
@@ -2238,17 +2339,34 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     _M_erase(true_type /* __uks */, const key_type& __k)
     -> size_type
+    {
+      __node_base_ptr __prev_n;
+      __node_ptr __n;
+      std::size_t __bkt;
+      if (size() <= __small_size_threshold())
+	{
+	  __prev_n = _M_find_before_node(__k);
+	  if (!__prev_n)
+	    return 0;
+
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __bkt = _M_bucket_index(*__n);
+	}
+      else
 	{
 	  __hash_code __code = this->_M_hash_code(__k);
-      std::size_t __bkt = _M_bucket_index(__code);
+	  __bkt = _M_bucket_index(__code);
 
 	  // Look for the node before the first matching node.
-      __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+	  __prev_n = _M_find_before_node(__bkt, __k, __code);
 	  if (!__prev_n)
 	    return 0;
 
 	  // We found a matching node, erase it.
-      __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	}
+
       _M_erase(__bkt, __prev_n, __n);
       return 1;
     }
@@ -2262,22 +2380,39 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	       _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
     _M_erase(false_type /* __uks */, const key_type& __k)
     -> size_type
+    {
+      std::size_t __bkt;
+      __node_base_ptr __prev_n;
+      __node_ptr __n;
+      if (size() <= __small_size_threshold())
+	{
+	  __prev_n = _M_find_before_node(__k);
+	  if (!__prev_n)
+	    return 0;
+
+	  // We found a matching node, erase it.
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	  __bkt = _M_bucket_index(*__n);
+	}
+      else
 	{
 	  __hash_code __code = this->_M_hash_code(__k);
-      std::size_t __bkt = _M_bucket_index(__code);
+	  __bkt = _M_bucket_index(__code);
 
 	  // Look for the node before the first matching node.
-      __node_base_ptr __prev_n = _M_find_before_node(__bkt, __k, __code);
+	  __prev_n = _M_find_before_node(__bkt, __k, __code);
 	  if (!__prev_n)
 	    return 0;
 
+	  __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
+	}
+
       // _GLIBCXX_RESOLVE_LIB_DEFECTS
       // 526. Is it undefined if a function in the standard changes
       // in parameters?
       // We use one loop to find all matching nodes and another to deallocate
       // them so that the key stays valid during the first loop. It might be
       // invalidated indirectly when destroying nodes.
-      __node_ptr __n = static_cast<__node_ptr>(__prev_n->_M_nxt);
       __node_ptr __n_last = __n->_M_next();
       while (__n_last && this->_M_node_equals(*__n, *__n_last))
 	__n_last = __n_last->_M_next();
diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index 0b5443fc187..1de90d7c46e 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -246,6 +246,20 @@ namespace __detail
       using __unique_keys = __bool_constant<_Unique_keys>;
     };
 
+  /**
+   *  struct _Hashtable_hash_traits
+   *
+   *  Important traits for hash tables depending on associated hasher.
+   *
+   */
+  template<typename _Hash>
+    struct _Hashtable_hash_traits
+    {
+      static constexpr std::size_t
+      __small_size_threshold() noexcept
+      { return std::__is_fast_hash<_Hash>::value ? 0 : 20; }
+    };
+
   /**
    *  struct _Hash_node_base
    *
@@ -1105,10 +1119,12 @@ namespace __detail
 			_Hash, _RangeHash, _Unused, _RehashPolicy, _Traits,
 			true_type /* Has load factor */>
     {
+    private:
       using __hashtable = _Hashtable<_Key, _Value, _Alloc, _ExtractKey,
 				     _Equal, _Hash, _RangeHash, _Unused,
 				     _RehashPolicy, _Traits>;
 
+    public:
       float
       max_load_factor() const noexcept
       {
@@ -1263,6 +1279,14 @@ namespace __detail
 		const _Hash_node_value<_Value, __cache_hash_code>& __n) const
 	{ return _M_hash_code(_ExtractKey{}(__n._M_v())); }
 
+      __hash_code
+      _M_hash_code(const _Hash_node_value<_Value, false>& __n) const
+      { return _M_hash_code(_ExtractKey{}(__n._M_v())); }
+
+      __hash_code
+      _M_hash_code(const _Hash_node_value<_Value, true>& __n) const
+      { return __n._M_hash_code; }
+
       std::size_t
       _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
       { return _RangeHash{}(__c, __bkt_count); }
@@ -1641,18 +1665,19 @@ namespace __detail
       { }
 
       bool
-      _M_equals(const _Key& __k, __hash_code __c,
-		const _Hash_node_value<_Value, __hash_cached::value>& __n) const
+      _M_key_equals(const _Key& __k,
+		    const _Hash_node_value<_Value,
+					   __hash_cached::value>& __n) const
       {
 	static_assert(__is_invocable<const _Equal&, const _Key&, const _Key&>{},
 	  "key equality predicate must be invocable with two arguments of "
 	  "key type");
-	return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
+	return _M_eq()(__k, _ExtractKey{}(__n._M_v()));
       }
 
       template<typename _Kt>
 	bool
-	_M_equals_tr(const _Kt& __k, __hash_code __c,
+	_M_key_equals_tr(const _Kt& __k,
 			 const _Hash_node_value<_Value,
 					     __hash_cached::value>& __n) const
 	{
@@ -1660,16 +1685,28 @@ namespace __detail
 	    __is_invocable<const _Equal&, const _Kt&, const _Key&>{},
 	    "key equality predicate must be invocable with two arguments of "
 	    "key type");
-	  return _S_equals(__c, __n) && _M_eq()(__k, _ExtractKey{}(__n._M_v()));
+	  return _M_eq()(__k, _ExtractKey{}(__n._M_v()));
 	}
 
+      bool
+      _M_equals(const _Key& __k, __hash_code __c,
+		const _Hash_node_value<_Value, __hash_cached::value>& __n) const
+      { return _S_equals(__c, __n) && _M_key_equals(__k, __n); }
+
+      template<typename _Kt>
+	bool
+	_M_equals_tr(const _Kt& __k, __hash_code __c,
+		     const _Hash_node_value<_Value,
+					    __hash_cached::value>& __n) const
+	{ return _S_equals(__c, __n) && _M_key_equals_tr(__k, __n); }
+
       bool
       _M_node_equals(
 	const _Hash_node_value<_Value, __hash_cached::value>& __lhn,
 	const _Hash_node_value<_Value, __hash_cached::value>& __rhn) const
       {
 	return _S_node_equals(__lhn, __rhn)
-	  && _M_eq()(_ExtractKey{}(__lhn._M_v()), _ExtractKey{}(__rhn._M_v()));
+	  && _M_key_equals(_ExtractKey{}(__lhn._M_v()), __rhn);
       }
 
       void
diff --git a/libstdc++-v3/src/c++11/hashtable_c++0x.cc b/libstdc++-v3/src/c++11/hashtable_c++0x.cc
index b9dec61b2ea..116192dfcf1 100644
--- a/libstdc++-v3/src/c++11/hashtable_c++0x.cc
+++ b/libstdc++-v3/src/c++11/hashtable_c++0x.cc
@@ -30,6 +30,7 @@
 #include <tuple>
 #include <ext/aligned_buffer.h>
 #include <ext/alloc_traits.h>
+#include <bits/functional_hash.h>
 #include <bits/hashtable_policy.h>
 
 namespace std _GLIBCXX_VISIBILITY(default)
diff --git a/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc b/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc
new file mode 100644
index 00000000000..ae63c15b5da
--- /dev/null
+++ b/libstdc++-v3/testsuite/performance/23_containers/insert_erase/unordered_small_size.cc
@@ -0,0 +1,125 @@
+// { dg-do run { target c++11 } }
+
+// Copyright (C) 2021 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <string>
+#include <sstream>
+#include <vector>
+#include <unordered_set>
+#include <testsuite_performance.h>
+
+namespace
+{
+  const int nb_elements = 20;
+  const int nb_insts = 150000;
+
+  template<typename _ElemType>
+    void bench(const char* desc, const std::vector<_ElemType>& elems)
+    {
+      using namespace __gnu_test;
+
+      time_counter time;
+      resource_counter resource;
+
+      std::vector<std::unordered_set<_ElemType>> insts;
+      for (int j = 0; j != nb_insts; ++j)
+	insts.emplace_back();
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int i = 0; i != nb_elements; ++i)
+	  us.insert(elems[i]);
+
+      stop_counters(time, resource);
+
+      std::ostringstream ostr;
+      ostr << desc << " 1st insert";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int i = nb_elements - 1; i >= 0; --i)
+	  {
+	    auto it = us.find(elems[i]);
+	    if (it != us.end())
+	      us.erase(it);
+	  }
+
+      stop_counters(time, resource);
+
+      ostr.str("");
+      ostr << desc << " find/erase";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	{
+	  us.insert(elems[0]);
+	  for (int i = nb_elements - 1; i >= 0; --i)
+	    us.insert(elems[i]);
+	}
+
+      stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << " 2nd insert";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+      start_counters(time, resource);
+
+      for (auto& us : insts)
+	for (int j = nb_elements - 1; j >= 0; --j)
+	  us.erase(elems[j]);
+
+      stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << " erase key ";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+    }
+}
+
+int main()
+{
+  {
+    std::vector<int> elems;
+    elems.reserve(nb_elements);
+    for (int i = 0; i != nb_elements; ++i)
+      elems.push_back(i);
+
+    bench("std::unordered_set<int>:    ", elems);
+  }
+
+  {
+    std::vector<std::string> elems;
+    {
+      elems.reserve(nb_elements);
+      for (int i = 0; i != nb_elements; ++i)
+	{
+	  std::ostringstream ostr;
+	  ostr << "string #" << i << ' ' << std::string(1000, 'a' + i);
+	  elems.push_back(ostr.str());
+	}
+    }
+
+    bench("std::unordered_set<string>: ", elems);
+  }
+
+  return 0;
+}
diff --git a/libstdc++-v3/testsuite/util/testsuite_performance.h b/libstdc++-v3/testsuite/util/testsuite_performance.h
index cba3a0d4b17..4ca15ab0e71 100644
--- a/libstdc++-v3/testsuite/util/testsuite_performance.h
+++ b/libstdc++-v3/testsuite/util/testsuite_performance.h
@@ -239,7 +239,7 @@ namespace __gnu_test
     out << std::setw(4) << t.real_time() << "r" << space;
     out << std::setw(4) << t.user_time() << "u" << space;
     out << std::setw(4) << t.system_time() << "s" << space;
-    out << std::setw(8) << r.allocated_memory() << "mem" << space;
+    out << std::setw(9) << r.allocated_memory() << "mem" << space;
     out << std::setw(4) << r.hard_page_fault() << "pf" << space;
 
     out << std::endl;


* Re: [PATCH][Hashtable 6/6] PR 68303 small size optimization
  2021-12-25 21:39           ` François Dumont
@ 2022-01-05 17:07             ` Jonathan Wakely
  0 siblings, 0 replies; 10+ messages in thread
From: Jonathan Wakely @ 2022-01-05 17:07 UTC (permalink / raw)
  To: François Dumont; +Cc: libstdc++, gcc-patches

On Sat, 25 Dec 2021 at 21:39, François Dumont <frs.dumont@gmail.com> wrote:

> On 23/12/21 2:03 pm, Jonathan Wakely wrote:
> > On 21/12/21 07:07 +0100, François Dumont wrote:
> >> Hi
> >>
> >> Is there a chance for this patch to be integrated for the next gcc
> >> release?
> >
> > Yes, I think this can still make it for GCC 12 (the patch was first
> > posted long ago in stage 1 and it's just me being slow to review it).
> >
> > But I have some questions and comments ...
> >
> >
> >> diff --git a/libstdc++-v3/include/bits/hashtable.h
> >> b/libstdc++-v3/include/bits/hashtable.h
> >> index 6e2d4c10cfe..6b2c6b99fef 100644
> >> --- a/libstdc++-v3/include/bits/hashtable.h
> >> +++ b/libstdc++-v3/include/bits/hashtable.h
> >> @@ -419,6 +419,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >>       _M_uses_single_bucket() const
> >>       { return _M_uses_single_bucket(_M_buckets); }
> >>
> >> +      static constexpr size_t
> >> +      __small_size_threshold()
> >> +      {
> >> +    return
> >> + __detail::_Hashtable_hash_traits<_Hash>::__small_size_threshold();
> >> +      }
> >> +
> I've added the noexcept qualification as you asked.
> >>       __hashtable_alloc&
> >>       _M_base_alloc() { return *this; }
> >>
> >> @@ -788,6 +795,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >>       _M_bucket_index(__hash_code __c) const
> >>       { return __hash_code_base::_M_bucket_index(__c,
> >> _M_bucket_count); }
> >>
> >> +      __node_base_ptr
> >> +      _M_find_before_node(const key_type&);
> >> +
> >>       // Find and insert helper functions and types
> >>       // Find the node before the one matching the criteria.
> >>       __node_base_ptr
> >> @@ -831,6 +841,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >>       __node_base_ptr
> >>       _M_get_previous_node(size_type __bkt, __node_ptr __n);
> >>
> >> +      pair<const_iterator, __hash_code>
> >> +      _M_compute_hash_code(const_iterator __hint, const key_type&
> >> __k) const;
> >> +
> >>       // Insert node __n with hash code __code, in bucket __bkt if no
> >>       // rehash (assumes no element with same key already present).
> >>       // Takes ownership of __n if insertion succeeds, throws otherwise.
> >> @@ -1126,7 +1139,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >>       void _M_rehash(size_type __bkt_count, const __rehash_state&
> >> __state);
> >>     };
> >>
> >> -
> >>   // Definitions of class template _Hashtable's out-of-line member
> >> functions.
> >>   template<typename _Key, typename _Value, typename _Alloc,
> >>        typename _ExtractKey, typename _Equal,
> >> @@ -1628,6 +1640,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >>     find(const key_type& __k)
> >>     -> iterator
> >>     {
> >> +      if (size() <= __small_size_threshold())
> >> +    {
> >> +      for (auto __it = begin(); __it != end(); ++__it)
> >> +        if (this->_M_key_equals(__k, *__it._M_cur))
> >> +          return __it;
> >> +      return end();
> >> +    }
> >
> > This loop is repeated a few times, would it be better factored out
> > into its own function? _M_find_loop or something? The return type is
> > different in some cases, so maybe it's OK like this.
> Yes, I also thought about that but there is often small changes from one
> loop to another as you noticed.
> >
> >
> >
> >> +
> >>       __hash_code __code = this->_M_hash_code(__k);
> >>       std::size_t __bkt = _M_bucket_index(__code);
> >>       return iterator(_M_find_node(__bkt, __k, __code));
> >> @@ -1643,6 +1663,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >>     find(const key_type& __k) const
> >>     -> const_iterator
> >>     {
> >> +      if (size() <= __small_size_threshold())
> >> +    {
> >> +      for (auto __it = begin(); __it != end(); ++__it)
> >> +        if (this->_M_key_equals(__k, *__it._M_cur))
> >> +          return __it;
> >> +      return end();
> >> +    }
> >> +
> >>       __hash_code __code = this->_M_hash_code(__k);
> >>       std::size_t __bkt = _M_bucket_index(__code);
> >>       return const_iterator(_M_find_node(__bkt, __k, __code));
> >> @@ -1855,6 +1883,35 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >>       }
> >> #endif
> >>
> >> +  // Find the node before the one whose key compares equal to k.
> >> +  // Return nullptr if no node is found.
> >> +  template<typename _Key, typename _Value, typename _Alloc,
> >> +       typename _ExtractKey, typename _Equal,
> >> +       typename _Hash, typename _RangeHash, typename _Unused,
> >> +       typename _RehashPolicy, typename _Traits>
> >> +    auto
> >> +    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
> >> +           _Hash, _RangeHash, _Unused, _RehashPolicy, _Traits>::
> >> +    _M_find_before_node(const key_type& __k)
> >> +    -> __node_base_ptr
> >> +    {
> >> +      __node_base_ptr __prev_p = &_M_before_begin;
> >
> > This is OK now, but would need to be std::__addressof(_M_before_begin)
> > if/when the _Hash_code_base type becomes dependent on the allocator's
> > pointer.
> Yes, maybe after gcc release we will talk about those fancy pointer
> types again.
> >
> >      __n_last = __n_last->_M_next();
> >> diff --git a/libstdc++-v3/include/bits/hashtable_policy.h
> >> b/libstdc++-v3/include/bits/hashtable_policy.h
> >> index 0b5443fc187..b4a8af081ce 100644
> >> --- a/libstdc++-v3/include/bits/hashtable_policy.h
> >> +++ b/libstdc++-v3/include/bits/hashtable_policy.h
> >> @@ -246,6 +246,20 @@ namespace __detail
> >>       using __unique_keys = __bool_constant<_Unique_keys>;
> >>     };
> >>
> >> +  /**
> >> +   *  struct _Hashtable_hash_traits
> >> +   *
> >> +   *  Important traits for hash tables depending on associated hasher.
> >> +   *
> >> +   */
> >> +  template<typename _Hash>
> >> +    struct _Hashtable_hash_traits
> >> +    {
> >> +      static constexpr std::size_t
> >> +      __small_size_threshold()
> >> +      { return std::__is_fast_hash<_Hash>::value ? 0 : 20; }
> >> +    };
> >
> > Yet another trait that nobody is ever going to specialize make me sad.
> > I don't have a better idea though.
>
> Sure, but maybe I can document it ?
>
> I also wonder why you did not propose to make it a constant rather than
> requesting to add the noexcept.
>
> >
> >        {
> >> @@ -1263,6 +1279,14 @@ namespace __detail
> >>         const _Hash_node_value<_Value, __cache_hash_code>& __n) const
> >>     { return _M_hash_code(_ExtractKey{}(__n._M_v())); }
> >>
> >> +      __hash_code
> >> +      _M_hash_code(const _Hash_node_value<_Value, false>& __n) const
> >> +      { return _M_hash_code(_ExtractKey{}(__n._M_v())); }
> >> +
> >> +      __hash_code
> >> +      _M_hash_code(const _Hash_node_value<_Value, true>& __n) const
> >> +      { return __n._M_hash_code; }
> >> +
> >>       std::size_t
> >>       _M_bucket_index(__hash_code __c, std::size_t __bkt_count) const
> >>       { return _RangeHash{}(__c, __bkt_count); }
> >> @@ -1273,17 +1297,14 @@ namespace __detail
> >>     noexcept( noexcept(declval<const _Hash&>()(declval<const _Key&>()))
> >>           && noexcept(declval<const _RangeHash&>()((__hash_code)0,
> >>                                (std::size_t)0)) )
> >> -      {
> >> -    return _RangeHash{}(_M_hash_code(_ExtractKey{}(__n._M_v())),
> >> -                __bkt_count);
> >> -      }
> >> +      { return _M_bucket_index(_M_hash_code(__n), __bkt_count); }
> >
> > Why add this extra level of indirection (and overload resolution)?
> >
> > We know this is a _Hash_node_value<V, false>, why call _M_hash_code to
> > decide how to get the hash code for it?
>
> Just to avoid code duplication, but indeed it introduces overload
> resolution, so I reverted it.
>
> >
> >
> >> diff --git a/libstdc++-v3/testsuite/util/testsuite_performance.h
> >> b/libstdc++-v3/testsuite/util/testsuite_performance.h
> >> index cba3a0d4b17..4ca15ab0e71 100644
> >> --- a/libstdc++-v3/testsuite/util/testsuite_performance.h
> >> +++ b/libstdc++-v3/testsuite/util/testsuite_performance.h
> >> @@ -239,7 +239,7 @@ namespace __gnu_test
> >>     out << std::setw(4) << t.real_time() << "r" << space;
> >>     out << std::setw(4) << t.user_time() << "u" << space;
> >>     out << std::setw(4) << t.system_time() << "s" << space;
> >> -    out << std::setw(8) << r.allocated_memory() << "mem" << space;
> >> +    out << std::setw(9) << r.allocated_memory() << "mem" << space;
> >
> > One day I need to figure out why the reported memory is garbage so
> > often
>
>
> Ok with those changes?
>
>
Yes, thanks very much (and Happy New Year).


end of thread, other threads:[~2022-01-05 17:08 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-17 21:31 [PATCH][Hashtable 6/6] PR 68303 small size optimization François Dumont
2020-07-17 12:58 ` Jonathan Wakely
2021-08-16 19:03   ` François Dumont
2021-09-23  4:36     ` François Dumont
2021-12-21  6:07       ` François Dumont
2021-12-21  6:28         ` Daniel Krügler
2021-12-21 17:55           ` François Dumont
2021-12-23 12:43             ` Jonathan Wakely
     [not found]         ` <YcRzoSSc534Lg+/F@redhat.com>
2021-12-25 21:39           ` François Dumont
2022-01-05 17:07             ` Jonathan Wakely

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).