public inbox for libstdc++-cvs@sourceware.org
* [gcc r11-4581] libstdc++: Use double for unordered container load factors [PR 96958]
@ 2020-10-30 21:20 Jonathan Wakely
From: Jonathan Wakely @ 2020-10-30 21:20 UTC (permalink / raw)
  To: gcc-cvs, libstdc++-cvs

https://gcc.gnu.org/g:a1343e5c74093124d7fbce6052d838f47a8eeb20

commit r11-4581-ga1343e5c74093124d7fbce6052d838f47a8eeb20
Author: Jonathan Wakely <jwakely@redhat.com>
Date:   Fri Oct 30 15:14:33 2020 +0000

    libstdc++: Use double for unordered container load factors [PR 96958]
    
    These calculations were changed to use long double nearly ten years ago
    in order to get more precision than float:
    https://gcc.gnu.org/pipermail/libstdc++/2011-September/036420.html
    
    However, double should be sufficient, while being potentially faster
    than long double, and not requiring soft FP calculations for targets
    without native long double support.
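    
    As a rough illustration, the affected computation reduces to
    something like the following sketch (the function name here is
    hypothetical; in libstdc++ this logic lives in _Prime_rehash_policy
    and _Power2_rehash_policy in <bits/hashtable_policy.h>):
    
        #include <cmath>
        #include <cstddef>
    
        // Sketch only: bucket count needed for __n elements at a given
        // max load factor, doing the division in double rather than
        // long double.
        std::size_t
        bkt_for_elements(std::size_t __n, float __max_load_factor)
        {
          return static_cast<std::size_t>(
              std::ceil(__n / (double)__max_load_factor));
        }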
    
    libstdc++-v3/ChangeLog:
    
            PR libstdc++/96958
            * include/bits/hashtable_policy.h (_Prime_rehash_policy)
            (_Power2_rehash_policy): Use double instead of long double.

Diff:
---
 libstdc++-v3/include/bits/hashtable_policy.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/libstdc++-v3/include/bits/hashtable_policy.h b/libstdc++-v3/include/bits/hashtable_policy.h
index cea5e549d25..7fed87f1c76 100644
--- a/libstdc++-v3/include/bits/hashtable_policy.h
+++ b/libstdc++-v3/include/bits/hashtable_policy.h
@@ -458,7 +458,7 @@ namespace __detail
     // Return a bucket count appropriate for n elements
     std::size_t
     _M_bkt_for_elements(std::size_t __n) const
-    { return __builtin_ceill(__n / (long double)_M_max_load_factor); }
+    { return __builtin_ceill(__n / (double)_M_max_load_factor); }
 
     // __n_bkt is current bucket count, __n_elt is current element count,
     // and __n_ins is number of elements to be inserted.  Do we need to
@@ -559,7 +559,7 @@ namespace __detail
 	_M_next_resize = size_t(-1);
       else
 	_M_next_resize
-	  = __builtin_floorl(__res * (long double)_M_max_load_factor);
+	  = __builtin_floorl(__res * (double)_M_max_load_factor);
 
       return __res;
     }
@@ -567,7 +567,7 @@ namespace __detail
     // Return a bucket count appropriate for n elements
     std::size_t
     _M_bkt_for_elements(std::size_t __n) const noexcept
-    { return __builtin_ceill(__n / (long double)_M_max_load_factor); }
+    { return __builtin_ceill(__n / (double)_M_max_load_factor); }
 
     // __n_bkt is current bucket count, __n_elt is current element count,
     // and __n_ins is number of elements to be inserted.  Do we need to
@@ -582,16 +582,16 @@ namespace __detail
 	  // If _M_next_resize is 0 it means that we have nothing allocated so
 	  // far and that we start inserting elements. In this case we start
 	  // with an initial bucket size of 11.
-	  long double __min_bkts
+	  double __min_bkts
 	    = std::max<std::size_t>(__n_elt + __n_ins, _M_next_resize ? 0 : 11)
-	      / (long double)_M_max_load_factor;
+	      / (double)_M_max_load_factor;
 	  if (__min_bkts >= __n_bkt)
 	    return { true,
 	      _M_next_bkt(std::max<std::size_t>(__builtin_floorl(__min_bkts) + 1,
 						__n_bkt * _S_growth_factor)) };
 
 	  _M_next_resize
-	    = __builtin_floorl(__n_bkt * (long double)_M_max_load_factor);
+	    = __builtin_floorl(__n_bkt * (double)_M_max_load_factor);
 	  return { false, 0 };
 	}
       else

