* [binutils-gdb] Improve performance of the H8 simulator
From: Jeff Law @ 2023-12-10 20:26 UTC (permalink / raw)
  To: gdb-cvs

https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=76c51bed59920115074e658fb2cfa76c68eb76ed

commit 76c51bed59920115074e658fb2cfa76c68eb76ed
Author: Jeff Law <jeffreyalaw@gmail.com>
Date:   Sun Dec 10 13:24:59 2023 -0700

    Improve performance of the H8 simulator
    
    Running the H8 port through the GCC testsuite currently takes 4h 30m on my
    fastest server -- that's roughly 1.5 hours per multilib tested, and that's
    with many tests already disabled for various reasons.
    
    To put that 1.5hr/multilib in perspective, that's roughly 3X the time for other
    embedded targets.  Clearly something isn't working as well as it should.
    
    A bit of digging with perf shows that we're spending a crazy amount of time
    decoding instructions in the H8 simulator.  It's not hard to see why --
    basically we take a blob of instruction data, then try to match it to every
    instruction in the H8 opcode table starting at the beginning.  That table has
    ~8000 entries (each different addressing mode is considered a different
    instruction in the table).
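
    For reference, a rough sketch of the old decode path ("opcode" and
    "matches" below are simplified stand-ins; the real loop is the scan of
    h8_opcodes in decode (), visible in the diff):

      #include <stddef.h>

      /* Simplified model of the pre-patch behaviour: every decode starts
         at entry 0 and walks the whole ~8000 entry table until something
         matches.  */
      struct opcode { const char *name; /* encoding data, etc.  */ };
      extern const struct opcode opcodes[];
      extern int matches (const struct opcode *, const unsigned char *);

      static const struct opcode *
      slow_decode (const unsigned char *data)
      {
        const struct opcode *q;

        for (q = opcodes; q->name; q++)
          if (matches (q, data))
            return q;
        return NULL;
      }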
    
    Naturally my first thought was to sort the table and use a binary search to
    find the right entry.  That's made excessively complex due to the encoding on
    the H8.  Just getting the sort right would be much more complex than I'd
    consider advisable.
    
    Another thought was to build a mapping to the right entry for all the
    instructions that can be disambiguated based on the first nibble (4 bits) of
    instruction data and a mapping for those which can be disambiguated based on
    the first byte of instruction data.
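
    Purely as an illustration, such a two-level dispatch might have looked
    something like this (none of these tables or functions exist in the
    simulator; the names are made up):

      /* Hypothetical direct dispatch: an opcode-table index for encodings
         that are unique in their first nibble, with a fallback table keyed
         on the first byte.  -1 means the prefix is ambiguous and a search
         is still required.  Both tables would be filled in at startup.  */
      static int insn_for_nibble[16];     /* keyed on data[0] >> 4  */
      static int insn_for_byte[256];      /* keyed on data[0]       */

      static int
      fast_lookup (const unsigned char *data)
      {
        int idx = insn_for_nibble[data[0] >> 4];

        if (idx >= 0)
          return idx;
        return insn_for_byte[data[0]];    /* may also be -1  */
      }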
    
    That seemed feasible until I realized that the H8/SX did some truly horrid
    things with encoding branches in the 0x4XYY opcode space.  It uses an "always
    zero" bit in the offset to encode new semantic information.  So we can't select
    on just 0x4X.  Ugh!
    
    We could always write a custom decoder.  I've done several through the years,
    and they can be very fast.  But there's no way I can justify the time to do
    that.
    
    So what I settled on was to first sort the opcode table by the first nibble,
    then find and record the index of the first instruction for each nibble.
    Decoding uses that index to start its search.  This cuts the overall
    build/test time by more than half.
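
    A minimal stand-alone model of that scheme (simplified table layout and
    hypothetical helper names; the real implementation is in the diff
    below):

      #include <string.h>

      #define NENTRIES 8000       /* roughly the size of the real table  */

      /* First nibble of each opcode entry, assumed already sorted.  */
      static unsigned char first_nibble[NENTRIES];
      static int nib_indices[16];

      static void
      setup_indices (void)
      {
        memset (nib_indices, -1, sizeof nib_indices);
        for (int i = 0; i < NENTRIES; i++)
          if (nib_indices[first_nibble[i]] == -1)
            nib_indices[first_nibble[i]] = i;
      }

      /* Decoding an instruction whose first byte is BYTE0 starts its
         search at this index instead of at entry 0.  */
      static int
      decode_start (unsigned char byte0)
      {
        return nib_indices[(byte0 & 0xf0) >> 4];
      }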
    
    Next I adjusted the sort so that instructions that are not available on the
    current sub-architecture are put at the end of the table.  This shaves another
    ~15% off the total cycle time.
    
    The net of the two changes is that on my fastest server we've gone from 4:30
    to 1:40 running the GCC testsuite.  Same test results before/after, of course.
    It's still not fast, but it's a hell of a lot better.

Diff:
---
 sim/h8300/compile.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 96 insertions(+), 2 deletions(-)

diff --git a/sim/h8300/compile.c b/sim/h8300/compile.c
index 96254ea916d..51ad66df681 100644
--- a/sim/h8300/compile.c
+++ b/sim/h8300/compile.c
@@ -44,6 +44,11 @@
 
 int debug;
 
+/* Each entry in this array is an index into the main opcode
+   array for the first instruction starting with the given
+   4 bit nibble.  */
+static int nib_indices[16];
+
 static int memory_size;
 
 #define X(op, size)  (op * 4 + size)
@@ -388,14 +393,21 @@ decode (SIM_DESC sd, sim_cpu *cpu, int addr, unsigned char *data, decoded_inst *
   int reg[3]   = {0, 0, 0};
   int rdisp[3] = {0, 0, 0};
   int opnum;
+  int index;
   const struct h8_opcode *q;
 
   dst->dst.type = -1;
   dst->src.type = -1;
   dst->op3.type = -1;
 
-  /* Find the exact opcode/arg combo.  */
-  for (q = h8_opcodes; q->name; q++)
+  /* We speed up instruction decoding by caching an index into
+     the main opcode array for the first instruction with the
+     given 4 bit nibble.  */
+  index = nib_indices[(data[0] & 0xf0) >> 4];
+
+  /* Find the exact opcode/arg combo, starting with the precomputed
+     index.  Note this loop is performance sensitive.  */
+  for (q = &h8_opcodes[index]; q->name; q++)
     {
       const op_type *nib = q->data.nib;
       unsigned int len = 0;
@@ -1557,6 +1569,85 @@ store2 (SIM_DESC sd, ea_type *arg, int n)
   return store_1 (sd, arg, n, 1);
 }
 
+/* Callback for qsort.  We sort first based on availability
+   (available instructions sort lower).  When the availability state
+   is the same, we use the first 4 bit nibble as a secondary
+   sort key.
+
+   We don't really care about 100% stability here, just that the
+   available instructions come first and all instructions with
+   the same starting nibble are consecutive.
+
+   We could do even better by recording frequency information into the
+   main table and using that to sort within a nibble's group with the
+   highest frequency instructions appearing first.  */
+
+static int
+instruction_comparator (const void *p1_, const void *p2_)
+{
+  struct h8_opcode *p1 = (struct h8_opcode *)p1_;
+  struct h8_opcode *p2 = (struct h8_opcode *)p2_;
+
+  /* The 1st sort key is based on whether or not the
+     instruction is even available.  This reduces the
+     number of entries we have to look at in the common
+     case.  */
+  bool p1_available = !((p1->available == AV_H8SX && !h8300sxmode)
+			|| (p1->available == AV_H8S  && !h8300smode)
+			|| (p1->available == AV_H8H  && !h8300hmode));
+
+  bool p2_available = !((p2->available == AV_H8SX && !h8300sxmode)
+			|| (p2->available == AV_H8S  && !h8300smode)
+			|| (p2->available == AV_H8H  && !h8300hmode));
+
+  /* Sort so that available instructions come before unavailable
+     instructions.  */
+  if (p1_available != p2_available)
+    return p2_available - p1_available;
+
+  /* Secondarily sort based on the first opcode nibble.  */
+  return p1->data.nib[0] - p2->data.nib[0];
+}
+
+
+/* OPS is the opcode array, which is initially sorted by mnemonic.
+
+   Sort the array so that the instructions for the sub-architecture
+   are at the start and unavailable instructions are at the end.
+
+   Within the set of available instructions, further sort them based
+   on the first 4 bit nibble.
+
+   Then find the first index into OPS for each of the 16 possible
+   nibbles and record that into NIB_INDICES to speed up decoding.  */
+
+static void
+sort_opcodes_and_setup_nibble_indices (struct h8_opcode *ops)
+{
+  const struct h8_opcode *q;
+  int *indices;
+  int i;
+
+  /* First sort the OPS array.  */
+  for (i = 0, q = ops; q->name; q++, i++)
+    ;
+  qsort (ops, i, sizeof (struct h8_opcode), instruction_comparator);
+
+  /* Now walk the array caching the index of the first
+     occurrence of each 4 bit nibble.  */
+  memset (nib_indices, -1, sizeof (nib_indices));
+  for (i = 0, q = ops; q->name; q++, i++)
+    {
+      int nib = q->data.nib[0];
+
+      /* Record the location of the first entry with the given
+	 starting nibble.  */
+      if (nib_indices[nib] == -1)
+	nib_indices[nib] = i;
+    }
+}
+
+
 /* Flag to be set whenever a new SIM_DESC object is created.  */
 static int init_pointers_needed = 1;
 
@@ -1639,6 +1730,9 @@ init_pointers (SIM_DESC sd)
 	  h8_set_reg (cpu, i, 0);
 	}
 
+      /* Sort the opcode table and create indices to speed up decode.  */
+      sort_opcodes_and_setup_nibble_indices (ops);
+
       init_pointers_needed = 0;
     }
 }
