Subject: [PATCH] Improve RTL CSE hash table hash usage
From: Richard Biener @ 2023-02-03 14:06 UTC
  To: gcc-patches

The RTL CSE hash table has a fixed number of buckets (32), each
with a linked list of entries with the same hash value.  The
actual hash values are computed using hash_rtx, which uses adds
for mixing and adds the rtx CODE as CODE << 7 (apart from some
exceptions such as MEM).  The unsigned int hash value is then
simply truncated for the actual lookup into the fixed-size
table, which means that CODE is usually simply lost.
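
To make the loss concrete, here is a minimal standalone sketch (not
the GCC code; HASH_SHIFT is 5 in current cse.cc, and the codes 10
and 11 are hypothetical rtx codes):

  #include <cstdio>

  static const unsigned HASH_SHIFT = 5;
  static const unsigned HASH_MASK = (1u << HASH_SHIFT) - 1;

  int main ()
  {
    /* hash_rtx-style values: CODE mixed in at bit 7, some low bits added.  */
    unsigned h1 = (10u << 7) + 3;   /* CODE 10 */
    unsigned h2 = (11u << 7) + 3;   /* CODE 11 */
    /* Plain truncation keeps only bits 0-4, so both land in bucket 3;
       the differing CODE bits at bit 7 and above are lost.  */
    printf ("%u %u\n", h1 & HASH_MASK, h2 & HASH_MASK);   /* prints 3 3 */
    return 0;
  }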

The following improves this truncation by first mixing in more
bits using xor.  It does not change the actual hash function
since that's used outside of CSE as well.
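
On the hypothetical values from the sketch above the fold works out
as follows (HASH_SHIFT = 5, so the bucket is (h ^ (h >> 5)) & HASH_MASK):

  h1 = (10 << 7) + 3 = 1283:  h1 & HASH_MASK               = 3
  h2 = (11 << 7) + 3 = 1411:  h2 & HASH_MASK               = 3   (collision)
                              (h1 ^ (h1 >> 5)) & HASH_MASK = 11
                              (h2 ^ (h2 >> 5)) & HASH_MASK = 15  (separated)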

An alternative would be to bump the fixed number of buckets,
say to 256, which would retain the LSB of CODE, or to 8192,
which can capture all 6 bits required for the last CODE.
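
The bit arithmetic behind those numbers: CODE enters the hash at
bit 7, so

  HASH_SIZE = 256  = 1 << 8   mask 0xff    keeps bit 7      (the LSB of CODE)
  HASH_SIZE = 8192 = 1 << 13  mask 0x1fff  keeps bits 7-12  (6 CODE bits)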

As the comment in CSE says, invalidate_memory and
flush_hash_table are possibly done frequently, and those at
least need to walk all slots, so when the hash table is mostly
empty, enlarging it will be a loss.  Still, there should be
more regular lookups by hash, so fewer collisions should pay
off as well.
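
For context, a rough sketch of the walk flush_hash_table has to do
(simplified from what cse.cc actually does, omitting the register
invalidation case); the cost is O(HASH_SIZE) no matter how few
entries the table holds:

  for (unsigned i = 0; i < HASH_SIZE; i++)
    /* Re-read the bucket head each iteration since removal unlinks P.  */
    for (struct table_elt *p = table[i]; p; p = table[i])
      remove_from_table (p, i);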

Without enlarging the table, a better hash function is
unlikely to make a big difference; simple statistics on the
number of collisions at insertion time show a reduction of
around 10%.  Bumping HASH_SHIFT by 1 improves that to 30%, at
the expense of reducing the average table fill by 10% (all of
these statistics are from looking just at fold-const.i at -O2).
Increasing HASH_SHIFT further leaves the table even more
sparse, likely showing that hash_rtx's use of add for mixing
is quite bad.  Bumping HASH_SHIFT by 2 removes 90% of all
collisions.
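
For reference, statistics of this kind can be gathered with trivial
instrumentation at insertion time; a hypothetical sketch, not part
of the patch (record_insert_stats and the counters are made up,
e.g. to be called from insert_with_costs):

  static unsigned long n_inserts, n_collisions;

  /* Count an insertion into an already occupied bucket as a collision.  */
  static inline void
  record_insert_stats (unsigned hash)
  {
    n_inserts++;
    if (table[hash])
      n_collisions++;
  }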

Experimenting with using inchash instead of adds for the
mixing does not improve things when looking at the numbers
with HASH_SHIFT bumped by 2.
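
The shape of that experiment, roughly (a sketch only; the real
change touches the individual mixing sites in hash_rtx):

  /* Instead of 'hash += (unsigned) code << 7' style adds ...  */
  inchash::hash hstate (hash);   /* seed with the hash accumulated so far */
  hstate.add_int ((unsigned) code);
  hstate.add_int ((unsigned) GET_MODE (x));
  hash = hstate.end ();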

Bootstrapped and tested on x86_64-unknown-linux-gnu.

Any opinions?

	* cse.cc (HASH): Turn into inline function and mix
	in another HASH_SHIFT bits.
	(SAFE_HASH): Likewise.
---
 gcc/cse.cc | 37 +++++++++++++++++++++++--------------
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/gcc/cse.cc b/gcc/cse.cc
index 37afc88b439..4777e559b86 100644
--- a/gcc/cse.cc
+++ b/gcc/cse.cc
@@ -420,20 +420,6 @@ struct table_elt
 #define HASH_SIZE	(1 << HASH_SHIFT)
 #define HASH_MASK	(HASH_SIZE - 1)
 
-/* Compute hash code of X in mode M.  Special-case case where X is a pseudo
-   register (hard registers may require `do_not_record' to be set).  */
-
-#define HASH(X, M)	\
- ((REG_P (X) && REGNO (X) >= FIRST_PSEUDO_REGISTER	\
-  ? (((unsigned) REG << 7) + (unsigned) REG_QTY (REGNO (X)))	\
-  : canon_hash (X, M)) & HASH_MASK)
-
-/* Like HASH, but without side-effects.  */
-#define SAFE_HASH(X, M)	\
- ((REG_P (X) && REGNO (X) >= FIRST_PSEUDO_REGISTER	\
-  ? (((unsigned) REG << 7) + (unsigned) REG_QTY (REGNO (X)))	\
-  : safe_hash (X, M)) & HASH_MASK)
-
 /* Determine whether register number N is considered a fixed register for the
    purpose of approximating register costs.
    It is desirable to replace other regs with fixed regs, to reduce need for
@@ -586,6 +572,29 @@ static machine_mode cse_cc_succs (basic_block, basic_block, rtx, rtx,
 
 static const struct rtl_hooks cse_rtl_hooks = RTL_HOOKS_INITIALIZER;
 \f
+/* Compute hash code of X in mode M.  Special-case case where X is a pseudo
+   register (hard registers may require `do_not_record' to be set).  */
+
+static inline unsigned
+HASH (rtx x, machine_mode mode)
+{
+  unsigned h = (REG_P (x) && REGNO (x) >= FIRST_PSEUDO_REGISTER
+		? (((unsigned) REG << 7) + (unsigned) REG_QTY (REGNO (x)))
+		: canon_hash (x, mode));
+  return (h ^ (h >> HASH_SHIFT)) & HASH_MASK;
+}
+
+/* Like HASH, but without side-effects.  */
+
+static inline unsigned
+SAFE_HASH (rtx x, machine_mode mode)
+{
+  unsigned h = (REG_P (x) && REGNO (x) >= FIRST_PSEUDO_REGISTER
+		? (((unsigned) REG << 7) + (unsigned) REG_QTY (REGNO (x)))
+		: safe_hash (x, mode));
+  return (h ^ (h >> HASH_SHIFT)) & HASH_MASK;
+}
+
 /* Nonzero if X has the form (PLUS frame-pointer integer).  */
 
 static bool
-- 
2.35.3
