public inbox for gdb-patches@sourceware.org
* [PATCH 1/5] btrace: fix gap indication
  2016-07-22  8:12 [PATCH 0/5] improve trace gap handling Markus Metzger
  2016-07-22  8:12 ` [PATCH 5/5] btrace: bridge gaps Markus Metzger
@ 2016-07-22  8:12 ` Markus Metzger
  2016-07-22  8:12 ` [PATCH 3/5] btrace: update tail call heuristic Markus Metzger
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Markus Metzger @ 2016-07-22  8:12 UTC (permalink / raw)
  To: gdb-patches; +Cc: palves

Trace gaps due to overflows or non-contiguous trace are ignored in the 'info
record' command.  Fix that.

Also add a warning when decoding the trace and print the instruction number
preceding the trace gap in that warning message.  It looks like this:

    (gdb) info record
    Active record target: record-btrace
    Recording format: Intel Processor Trace.
    Buffer size: 16kB.
    warning: Decode error (-13) at instruction 101044 (offset = 0x29f0, pc = 0x7ffff728a642): no memory mapped at this address.
    Recorded 101044 instructions in 2093 functions (1 gaps) for thread 1 (process 5360).
    (gdb) record instruction-history 101044
    101044     0x00007ffff728a640:  pop    %r13
    [decode error (-13): no memory mapped at this address]

Remove the dead code that was supposed to print a gap warning at the end of
trace decode.  This isn't really needed since we now print a warning for each
gap.

2016-07-22  Markus Metzger  <markus.t.metzger@intel.com>

gdb/
	* btrace.c (ftrace_add_pt): Fix gap indication.  Add warning for non-
	contiguous trace and overflow.  Rephrase trace decode warning and print
	instruction number.  Remove dead gaps warning.
	(btrace_compute_ftrace_bts): Rephrase warnings and print instruction
	number.
---
 gdb/btrace.c | 54 +++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 19 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index f2cb750..5376cfa 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -629,11 +629,12 @@ btrace_compute_ftrace_bts (struct thread_info *tp,
 		 beginning.  */
 	      if (begin != NULL)
 		{
-		  warning (_("Recorded trace may be corrupted around %s."),
-			   core_addr_to_string_nz (pc));
-
 		  end = ftrace_new_gap (end, BDE_BTS_OVERFLOW);
 		  ngaps += 1;
+
+		  warning (_("Recorded trace may be corrupted at instruction "
+			     "%u (pc = %s)."), end->insn_offset - 1,
+			   core_addr_to_string_nz (pc));
 		}
 	      break;
 	    }
@@ -671,14 +672,15 @@ btrace_compute_ftrace_bts (struct thread_info *tp,
 	  /* We can't continue if we fail to compute the size.  */
 	  if (size <= 0)
 	    {
-	      warning (_("Recorded trace may be incomplete around %s."),
-		       core_addr_to_string_nz (pc));
-
 	      /* Indicate the gap in the trace.  We just added INSN so we're
 		 not at the beginning.  */
 	      end = ftrace_new_gap (end, BDE_BTS_INSN_SIZE);
 	      ngaps += 1;
 
+	      warning (_("Recorded trace may be incomplete at instruction %u "
+			 "(pc = %s)."), end->insn_offset - 1,
+		       core_addr_to_string_nz (pc));
+
 	      break;
 	    }
 
@@ -749,11 +751,10 @@ ftrace_add_pt (struct pt_insn_decoder *decoder,
 {
   struct btrace_function *begin, *end, *upd;
   uint64_t offset;
-  int errcode, nerrors;
+  int errcode;
 
   begin = *pbegin;
   end = *pend;
-  nerrors = 0;
   for (;;)
     {
       struct btrace_insn btinsn;
@@ -784,11 +785,29 @@ ftrace_add_pt (struct pt_insn_decoder *decoder,
 		 flag.  The ENABLED instruction flag means that we continued
 		 from some other instruction.  Indicate this as a trace gap.  */
 	      if (insn.enabled)
-		*pend = end = ftrace_new_gap (end, BDE_PT_DISABLED);
+		{
+		  *pend = end = ftrace_new_gap (end, BDE_PT_DISABLED);
+		  *ngaps += 1;
+
+		  pt_insn_get_offset (decoder, &offset);
+
+		  warning (_("Non-contiguous trace at instruction %u (offset "
+			     "= 0x%" PRIx64 ", pc = 0x%" PRIx64 ")."),
+			   end->insn_offset - 1, offset, insn.ip);
+		}
 
 	      /* Indicate trace overflows.  */
 	      if (insn.resynced)
-		*pend = end = ftrace_new_gap (end, BDE_PT_OVERFLOW);
+		{
+		  *pend = end = ftrace_new_gap (end, BDE_PT_OVERFLOW);
+		  *ngaps += 1;
+
+		  pt_insn_get_offset (decoder, &offset);
+
+		  warning (_("Overflow at instruction %u (offset = 0x%" PRIx64
+			     ", pc = 0x%" PRIx64 ")."), end->insn_offset - 1,
+			   offset, insn.ip);
+		}
 	    }
 
 	  upd = ftrace_update_function (end, insn.ip);
@@ -819,19 +838,16 @@ ftrace_add_pt (struct pt_insn_decoder *decoder,
       if (begin == NULL)
 	continue;
 
-      pt_insn_get_offset (decoder, &offset);
-
-      warning (_("Failed to decode Intel Processor Trace near trace "
-		 "offset 0x%" PRIx64 " near recorded PC 0x%" PRIx64 ": %s."),
-	       offset, insn.ip, pt_errstr (pt_errcode (errcode)));
-
       /* Indicate the gap in the trace.  */
       *pend = end = ftrace_new_gap (end, errcode);
       *ngaps += 1;
-    }
 
-  if (nerrors > 0)
-    warning (_("The recorded execution trace may have gaps."));
+      pt_insn_get_offset (decoder, &offset);
+
+      warning (_("Decode error (%d) at instruction %u (offset = 0x%" PRIx64
+		 ", pc = 0x%" PRIx64 "): %s."), errcode, end->insn_offset - 1,
+	       offset, insn.ip, pt_errstr (pt_errcode (errcode)));
+    }
 }
 
 /* A callback function to allow the trace decoder to read the inferior's
-- 
1.8.3.1


* [PATCH 0/5] improve trace gap handling
@ 2016-07-22  8:12 Markus Metzger
  2016-07-22  8:12 ` [PATCH 5/5] btrace: bridge gaps Markus Metzger
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Markus Metzger @ 2016-07-22  8:12 UTC (permalink / raw)
  To: gdb-patches; +Cc: palves

A small patch series that improves dealing with gaps in recorded trace.

Markus Metzger (5):
  btrace: fix gap indication
  btrace: allow leading trace gaps
  btrace: update tail call heuristic
  btrace: preserve function level for unexpected returns
  btrace: bridge gaps

 gdb/btrace.c        | 531 ++++++++++++++++++++++++++++++++++++++++++++++------
 gdb/record-btrace.c |  33 +++-
 2 files changed, 503 insertions(+), 61 deletions(-)

-- 
1.8.3.1


* [PATCH 2/5] btrace: allow leading trace gaps
  2016-07-22  8:12 [PATCH 0/5] improve trace gap handling Markus Metzger
                   ` (2 preceding siblings ...)
  2016-07-22  8:12 ` [PATCH 3/5] btrace: update tail call heuristic Markus Metzger
@ 2016-07-22  8:12 ` Markus Metzger
  2016-07-22  8:12 ` [PATCH 4/5] btrace: preserve function level for unexpected returns Markus Metzger
  2016-10-27 10:59 ` [PATCH 0/5] improve trace gap handling Yao Qi
  5 siblings, 0 replies; 11+ messages in thread
From: Markus Metzger @ 2016-07-22  8:12 UTC (permalink / raw)
  To: gdb-patches; +Cc: palves

GDB ignores trace gaps from decode errors or overflows at the beginning of the
trace.  There isn't really a gap in the trace; the trace just starts a bit
later than expected.

In cases where there is no trace at all or where the trace is smaller than
expected, this may hide the reason for the missing trace.

Allow leading trace gaps.  They will be shown as decode warnings and by the
record function-call-history command.

    (gdb) info record
    Active record target: record-btrace
    Recording format: Intel Processor Trace.
    Buffer size: 16kB.
    warning: Decode error (-6) at instruction 0 (offset = 0x58, pc = 0x0): unexpected packet context.
    warning: Decode error (-6) at instruction 0 (offset = 0xb0, pc = 0x0): unexpected packet context.
    warning: Decode error (-6) at instruction 0 (offset = 0x168, pc = 0x0): unexpected packet context.
    warning: Decode error (-6) at instruction 54205 (offset = 0xe08, pc = 0x0): unexpected packet context.
    warning: Decode error (-6) at instruction 54205 (offset = 0xe60, pc = 0x0): unexpected packet context.
    warning: Decode error (-6) at instruction 54205 (offset = 0xed8, pc = 0x0): unexpected packet context.
    Recorded 91582 instructions in 1111 functions (6 gaps) for thread 1 (process 15710).
    (gdb) record function-call-history /c 1
    1       [decode error (-6): unexpected packet context]
    2       [decode error (-6): unexpected packet context]
    3       [decode error (-6): unexpected packet context]
    4           _dl_addr
    5             ??
    6           _dl_addr
    7         ??
    8           ??
    9         ??
    10      ??

Leading trace gaps will not be shown by the record instruction-history command
without further changes.
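After the patch, the forward-stepping loop in record_btrace_single_step_forward
reads as follows (assembled from the record-btrace.c hunks below); the backward
variant is symmetric, using btrace_insn_prev:

    /* Skip gaps during replay.  If we end up at a gap (at the end of the
       trace), jump back to the instruction at which we started.  */
    start = *replay;
    do
      {
        unsigned int steps;

        /* We will bail out here if we continue stepping after reaching
           the end of the execution history.  */
        steps = btrace_insn_next (replay, 1);
        if (steps == 0)
          {
            /* Undo the gap-skipping and report the end of the history.  */
            *replay = start;
            return btrace_step_no_history ();
          }
      }
    while (btrace_insn_get (replay) == NULL);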

2016-07-22  Markus Metzger  <markus.t.metzger@intel.com>

gdb/
	* btrace.c (btrace_compute_ftrace_bts, ftrace_add_pt): Allow leading gaps.
	* record-btrace.c (record_btrace_single_step_forward)
	(record_btrace_single_step_backward): Jump back to last instruction if
	step ends at a gap.
	(record_btrace_goto_begin): Skip gaps.
---
 gdb/btrace.c        | 50 +++++++++++++++++++++++++-------------------------
 gdb/record-btrace.c | 33 +++++++++++++++++++++++++++------
 2 files changed, 52 insertions(+), 31 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index 5376cfa..e0d0f27 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -625,17 +625,17 @@ btrace_compute_ftrace_bts (struct thread_info *tp,
 	  /* We should hit the end of the block.  Warn if we went too far.  */
 	  if (block->end < pc)
 	    {
-	      /* Indicate the gap in the trace - unless we're at the
-		 beginning.  */
-	      if (begin != NULL)
-		{
-		  end = ftrace_new_gap (end, BDE_BTS_OVERFLOW);
-		  ngaps += 1;
+	      /* Indicate the gap in the trace.  */
+	      end = ftrace_new_gap (end, BDE_BTS_OVERFLOW);
+	      if (begin == NULL)
+		begin = end;
+
+	      ngaps += 1;
+
+	      warning (_("Recorded trace may be corrupted at instruction "
+			 "%u (pc = %s)."), end->insn_offset - 1,
+		       core_addr_to_string_nz (pc));
 
-		  warning (_("Recorded trace may be corrupted at instruction "
-			     "%u (pc = %s)."), end->insn_offset - 1,
-			   core_addr_to_string_nz (pc));
-		}
 	      break;
 	    }
 
@@ -795,19 +795,22 @@ ftrace_add_pt (struct pt_insn_decoder *decoder,
 			     "= 0x%" PRIx64 ", pc = 0x%" PRIx64 ")."),
 			   end->insn_offset - 1, offset, insn.ip);
 		}
+	    }
 
-	      /* Indicate trace overflows.  */
-	      if (insn.resynced)
-		{
-		  *pend = end = ftrace_new_gap (end, BDE_PT_OVERFLOW);
-		  *ngaps += 1;
+	  /* Indicate trace overflows.  */
+	  if (insn.resynced)
+	    {
+	      *pend = end = ftrace_new_gap (end, BDE_PT_OVERFLOW);
+	      if (begin == NULL)
+		*pbegin = begin = end;
 
-		  pt_insn_get_offset (decoder, &offset);
+	      *ngaps += 1;
 
-		  warning (_("Overflow at instruction %u (offset = 0x%" PRIx64
-			     ", pc = 0x%" PRIx64 ")."), end->insn_offset - 1,
-			   offset, insn.ip);
-		}
+	      pt_insn_get_offset (decoder, &offset);
+
+	      warning (_("Overflow at instruction %u (offset = 0x%" PRIx64
+			 ", pc = 0x%" PRIx64 ")."), end->insn_offset - 1,
+		       offset, insn.ip);
 	    }
 
 	  upd = ftrace_update_function (end, insn.ip);
@@ -833,13 +836,10 @@ ftrace_add_pt (struct pt_insn_decoder *decoder,
       if (errcode == -pte_eos)
 	break;
 
-      /* If the gap is at the very beginning, we ignore it - we will have
-	 less trace, but we won't have any holes in the trace.  */
-      if (begin == NULL)
-	continue;
-
       /* Indicate the gap in the trace.  */
       *pend = end = ftrace_new_gap (end, errcode);
+      if (begin == NULL)
+	*pbegin = begin = end;
       *ngaps += 1;
 
       pt_insn_get_offset (decoder, &offset);
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 24594a9..b3318cd 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -2262,7 +2262,7 @@ record_btrace_replay_at_breakpoint (struct thread_info *tp)
 static struct target_waitstatus
 record_btrace_single_step_forward (struct thread_info *tp)
 {
-  struct btrace_insn_iterator *replay, end;
+  struct btrace_insn_iterator *replay, end, start;
   struct btrace_thread_info *btinfo;
 
   btinfo = &tp->btrace;
@@ -2276,7 +2276,9 @@ record_btrace_single_step_forward (struct thread_info *tp)
   if (record_btrace_replay_at_breakpoint (tp))
     return btrace_step_stopped ();
 
-  /* Skip gaps during replay.  */
+  /* Skip gaps during replay.  If we end up at a gap (at the end of the trace),
+     jump back to the instruction at which we started.  */
+  start = *replay;
   do
     {
       unsigned int steps;
@@ -2285,7 +2287,10 @@ record_btrace_single_step_forward (struct thread_info *tp)
 	 of the execution history.  */
       steps = btrace_insn_next (replay, 1);
       if (steps == 0)
-	return btrace_step_no_history ();
+	{
+	  *replay = start;
+	  return btrace_step_no_history ();
+	}
     }
   while (btrace_insn_get (replay) == NULL);
 
@@ -2306,7 +2311,7 @@ record_btrace_single_step_forward (struct thread_info *tp)
 static struct target_waitstatus
 record_btrace_single_step_backward (struct thread_info *tp)
 {
-  struct btrace_insn_iterator *replay;
+  struct btrace_insn_iterator *replay, start;
   struct btrace_thread_info *btinfo;
 
   btinfo = &tp->btrace;
@@ -2317,14 +2322,19 @@ record_btrace_single_step_backward (struct thread_info *tp)
     replay = record_btrace_start_replaying (tp);
 
   /* If we can't step any further, we reached the end of the history.
-     Skip gaps during replay.  */
+     Skip gaps during replay.  If we end up at a gap (at the beginning of
+     the trace), jump back to the instruction at which we started.  */
+  start = *replay;
   do
     {
       unsigned int steps;
 
       steps = btrace_insn_prev (replay, 1);
       if (steps == 0)
-	return btrace_step_no_history ();
+	{
+	  *replay = start;
+	  return btrace_step_no_history ();
+	}
     }
   while (btrace_insn_get (replay) == NULL);
 
@@ -2734,6 +2744,17 @@ record_btrace_goto_begin (struct target_ops *self)
   tp = require_btrace_thread ();
 
   btrace_insn_begin (&begin, &tp->btrace);
+
+  /* Skip gaps at the beginning of the trace.  */
+  while (btrace_insn_get (&begin) == NULL)
+    {
+      unsigned int steps;
+
+      steps = btrace_insn_next (&begin, 1);
+      if (steps == 0)
+	error (_("No trace."));
+    }
+
   record_btrace_set_replay (tp, &begin);
 }
 
-- 
1.8.3.1


* [PATCH 5/5] btrace: bridge gaps
  2016-07-22  8:12 [PATCH 0/5] improve trace gap handling Markus Metzger
@ 2016-07-22  8:12 ` Markus Metzger
  2016-07-22  8:12 ` [PATCH 1/5] btrace: fix gap indication Markus Metzger
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Markus Metzger @ 2016-07-22  8:12 UTC (permalink / raw)
  To: gdb-patches; +Cc: palves

Most of the time, the trace should be in one piece.  This case is handled fine
by GDB.  In some cases, however, there may be gaps in the trace.  They result
from trace decode errors or from overflows.

A gap in the trace means we lost an unknown amount of trace.  Gaps can be very
small, such as a few instructions in the same function, or they can be rather
big.  We may, for example, lose a few function calls or returns.  The trace may
continue in a different function and we likely don't know how we got there.

Even though we can't say how the program executed across a gap, higher levels
may not be impacted too much by it.  Let's assume we have functions a-e and a
trace that looks roughly like this:

  a
   \
    b                    b
     \                  /
      c   <gap>        c
                      /
                 d   d
                  \ /
                   e

Even though we can't say for sure, it is likely that b and c are the same
function instances before and after the gap.  This patch tries to connect
the c and b function segments across the gap.

This will add function a to the back trace of b on the right-hand side.  The
changes are reflected in GDB's internal representation of the trace and will
improve:

  - the output of "record function-call-history /c"
  - the output of "backtrace" in replay mode
  - source stepping in replay mode (indirectly, via the improved back trace)
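
To illustrate the matching step, here is a self-contained toy model (not GDB
code; the real implementation is ftrace_match_backtrace further down): each
node is a function segment with an "up" pointer to its caller, and we count
how many caller levels agree on both sides of the gap.

    #include <stdio.h>
    #include <string.h>

    struct segment
    {
      const char *name;
      struct segment *up;	/* Caller, or NULL at the top.  */
    };

    /* Count matching caller levels; zero on any mismatch.  */
    static int
    match_backtrace (struct segment *lhs, struct segment *rhs)
    {
      int matches;

      for (matches = 0; lhs != NULL && rhs != NULL; ++matches)
        {
          if (strcmp (lhs->name, rhs->name) != 0)
            return 0;

          lhs = lhs->up;
          rhs = rhs->up;
        }

      return matches;
    }

    int
    main (void)
    {
      /* Left of the gap: a -> b -> c.  Right of the gap: b -> c, with a
         missing - as in the diagram above.  */
      struct segment a = { "a", NULL };
      struct segment b_l = { "b", &a };
      struct segment c_l = { "c", &b_l };
      struct segment b_r = { "b", NULL };
      struct segment c_r = { "c", &b_r };

      /* Prints 2: c and b match; a has no counterpart on the right.  */
      printf ("%d\n", match_backtrace (&c_l, &c_r));
      return 0;
    }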

I don't have an automated test for this patch: decode errors will eventually be
fixed, and overflows occur only sporadically and are quite rare.  I tested it by
hacking GDB to
provoke a decode error and on the expected gap in the gdb.btrace/dlopen.exp
test.

The issue is that we can't predict where we will be able to re-sync in case of
errors.  For the expected decode error in gdb.btrace/dlopen.exp, for example, we
may be able to re-sync somewhere in dlclose, in test, in main, or not at all.

Here's one example run of gdb.btrace/dlopen.exp with and without this patch.

    (gdb) info record
    Active record target: record-btrace
    Recording format: Intel Processor Trace.
    Buffer size: 16kB.
    warning: Non-contiguous trace at instruction 66608 (offset = 0xa83, pc = 0xb7fdcc31).
    warning: Non-contiguous trace at instruction 66652 (offset = 0xa9b, pc = 0xb7fdcc31).
    warning: Non-contiguous trace at instruction 66770 (offset = 0xacb, pc = 0xb7fdcc31).
    warning: Non-contiguous trace at instruction 66966 (offset = 0xb60, pc = 0xb7ff5ee4).
    warning: Non-contiguous trace at instruction 66994 (offset = 0xb74, pc = 0xb7ff5f24).
    warning: Non-contiguous trace at instruction 67334 (offset = 0xbac, pc = 0xb7ff5e6d).
    warning: Non-contiguous trace at instruction 69022 (offset = 0xc04, pc = 0xb7ff60b3).
    warning: Non-contiguous trace at instruction 69116 (offset = 0xc1c, pc = 0xb7ff60b3).
    warning: Non-contiguous trace at instruction 69504 (offset = 0xc74, pc = 0xb7ff605d).
    warning: Non-contiguous trace at instruction 83648 (offset = 0xecc, pc = 0xb7ff6134).
    warning: Decode error (-13) at instruction 83876 (offset = 0xf48, pc = 0xb7fd6380): no memory mapped at this address.
    warning: Non-contiguous trace at instruction 83876 (offset = 0x11b7, pc = 0xb7ff1c70).
    Recorded 83948 instructions in 912 functions (12 gaps) for thread 1 (process 12996).
    (gdb) record instruction-history 83876, +2
    83876   => 0xb7fec46f <call_init.part.0+95>:    call   *%eax
    [decode error (-13): no memory mapped at this address]
    [disabled]
    83877      0xb7ff1c70 <_dl_close_worker.part.0+1584>:   nop

Without the patch, the trace is disconnected and the backtrace is short:

    (gdb) record goto 83876
    #0  0xb7fec46f in call_init.part () from /lib/ld-linux.so.2
    (gdb) backtrace
    #0  0xb7fec46f in call_init.part () from /lib/ld-linux.so.2
    #1  0xb7fec5d0 in _dl_init () from /lib/ld-linux.so.2
    #2  0xb7ff0fe3 in dl_open_worker () from /lib/ld-linux.so.2
    Backtrace stopped: not enough registers or memory available to unwind further
    (gdb) record goto 83877
    #0  0xb7ff1c70 in _dl_close_worker.part.0 () from /lib/ld-linux.so.2
    (gdb) backtrace
    #0  0xb7ff1c70 in _dl_close_worker.part.0 () from /lib/ld-linux.so.2
    #1  0xb7ff287a in _dl_close () from /lib/ld-linux.so.2
    #2  0xb7fc3d5d in dlclose_doit () from /lib/libdl.so.2
    #3  0xb7fec354 in _dl_catch_error () from /lib/ld-linux.so.2
    #4  0xb7fc43dd in _dlerror_run () from /lib/libdl.so.2
    #5  0xb7fc3d98 in dlclose () from /lib/libdl.so.2
    #6  0x0804860a in test ()
    #7  0x08048628 in main ()

With the patch, GDB is able to connect the trace pieces and we get a full
backtrace.

    (gdb) record goto 83876
    #0  0xb7fec46f in call_init.part () from /lib/ld-linux.so.2
    (gdb) backtrace
    #0  0xb7fec46f in call_init.part () from /lib/ld-linux.so.2
    #1  0xb7fec5d0 in _dl_init () from /lib/ld-linux.so.2
    #2  0xb7ff0fe3 in dl_open_worker () from /lib/ld-linux.so.2
    #3  0xb7fec354 in _dl_catch_error () from /lib/ld-linux.so.2
    #4  0xb7ff02e2 in _dl_open () from /lib/ld-linux.so.2
    #5  0xb7fc3c65 in dlopen_doit () from /lib/libdl.so.2
    #6  0xb7fec354 in _dl_catch_error () from /lib/ld-linux.so.2
    #7  0xb7fc43dd in _dlerror_run () from /lib/libdl.so.2
    #8  0xb7fc3d0e in dlopen@@GLIBC_2.1 () from /lib/libdl.so.2
    #9  0xb7ff28ee in _dl_runtime_resolve () from /lib/ld-linux.so.2
    #10 0x0804841c in ?? ()
    #11 0x08048470 in dlopen@plt ()
    #12 0x080485a3 in test ()
    #13 0x08048628 in main ()
    (gdb) record goto 83877
    #0  0xb7ff1c70 in _dl_close_worker.part.0 () from /lib/ld-linux.so.2
    (gdb) backtrace
    #0  0xb7ff1c70 in _dl_close_worker.part.0 () from /lib/ld-linux.so.2
    #1  0xb7ff287a in _dl_close () from /lib/ld-linux.so.2
    #2  0xb7fc3d5d in dlclose_doit () from /lib/libdl.so.2
    #3  0xb7fec354 in _dl_catch_error () from /lib/ld-linux.so.2
    #4  0xb7fc43dd in _dlerror_run () from /lib/libdl.so.2
    #5  0xb7fc3d98 in dlclose () from /lib/libdl.so.2
    #6  0x0804860a in test ()
    #7  0x08048628 in main ()

It worked nicely in this case but it may, of course, also lead to weird
connections; it is a heuristic, after all.

It works best when the gap is small and the trace pieces are long.

2016-07-22  Markus Metzger  <markus.t.metzger@intel.com>

gdb/
	* btrace.c (bfun_s): New typedef.
	(ftrace_update_caller): Print caller in debug dump.
	(ftrace_get_caller, ftrace_match_backtrace, ftrace_fixup_level)
	(ftrace_compute_global_level_offset, ftrace_connect_bfun)
	(ftrace_connect_backtrace, ftrace_bridge_gap, btrace_bridge_gaps): New.
	(btrace_compute_ftrace_bts): Pass vector of gaps.  Collect gaps.
	(btrace_compute_ftrace_pt): Likewise.
	(btrace_compute_ftrace): Split into this, ...
	(btrace_compute_ftrace_1): ... this, and ...
	(btrace_finalize_ftrace): ... this.  Call btrace_bridge_gaps.
---
 gdb/btrace.c | 430 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 412 insertions(+), 18 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index 32036cf..bf6afdd 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -48,6 +48,10 @@ static struct cmd_list_element *maint_btrace_pt_show_cmdlist;
 /* Control whether to skip PAD packets when computing the packet history.  */
 static int maint_btrace_pt_skip_pad = 1;
 
+/* A vector of function segments.  */
+typedef struct btrace_function * bfun_s;
+DEF_VEC_P (bfun_s);
+
 static void btrace_add_pc (struct thread_info *tp);
 
 /* Print a record debug message.  Use do ... while (0) to avoid ambiguities
@@ -233,6 +237,7 @@ ftrace_update_caller (struct btrace_function *bfun,
   bfun->flags = flags;
 
   ftrace_debug (bfun, "set caller");
+  ftrace_debug (caller, "..to");
 }
 
 /* Fix up the caller for all segments of a function.  */
@@ -295,6 +300,18 @@ ftrace_new_tailcall (struct btrace_function *caller,
   return bfun;
 }
 
+/* Return the caller of BFUN or NULL if there is none.  This function skips
+   tail calls in the call chain.  */
+static struct btrace_function *
+ftrace_get_caller (struct btrace_function *bfun)
+{
+  for (; bfun != NULL; bfun = bfun->up)
+    if ((bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) == 0)
+      return bfun->up;
+
+  return NULL;
+}
+
 /* Find the innermost caller in the back trace of BFUN with MFUN/FUN
    symbol information.  */
 
@@ -598,23 +615,359 @@ ftrace_classify_insn (struct gdbarch *gdbarch, CORE_ADDR pc)
   return iclass;
 }
 
+/* Try to match the back trace at LHS to the back trace at RHS.  Returns the
+   number of matching function segments or zero if the back traces do not
+   match.  */
+
+static int
+ftrace_match_backtrace (struct btrace_function *lhs,
+			struct btrace_function *rhs)
+{
+  int matches;
+
+  for (matches = 0; lhs != NULL && rhs != NULL; ++matches)
+    {
+      if (ftrace_function_switched (lhs, rhs->msym, rhs->sym))
+	return 0;
+
+      lhs = ftrace_get_caller (lhs);
+      rhs = ftrace_get_caller (rhs);
+    }
+
+  return matches;
+}
+
+/* Add ADJUSTMENT to the level of BFUN and succeeding function segments.  */
+
+static void
+ftrace_fixup_level (struct btrace_function *bfun, int adjustment)
+{
+  if (adjustment == 0)
+    return;
+
+  DEBUG_FTRACE ("fixup level (%+d)", adjustment);
+  ftrace_debug (bfun, "..bfun");
+
+  for (; bfun != NULL; bfun = bfun->flow.next)
+    bfun->level += adjustment;
+}
+
+/* Recompute the global level offset.  Traverse the function trace and compute
+   the global level offset as the negative of the minimal function level.  */
+
+static void
+ftrace_compute_global_level_offset (struct btrace_thread_info *btinfo)
+{
+  struct btrace_function *bfun, *end;
+  int level;
+
+  if (btinfo == NULL)
+    return;
+
+  bfun = btinfo->begin;
+  if (bfun == NULL)
+    return;
+
+  /* The last function segment contains the current instruction, which is not
+     really part of the trace.  If it contains just this one instruction, we
+     stop when we reach it; otherwise, we let the below loop run to the end.  */
+  end = btinfo->end;
+  if (VEC_length (btrace_insn_s, end->insn) > 1)
+    end = NULL;
+
+  level = INT_MAX;
+  for (; bfun != end; bfun = bfun->flow.next)
+    level = min (level, bfun->level);
+
+  DEBUG_FTRACE ("setting global level offset: %d", -level);
+  btinfo->level = -level;
+}
+
+/* Connect the function segments PREV and NEXT in a bottom-to-top walk as in
+   ftrace_connect_backtrace.  */
+
+static void
+ftrace_connect_bfun (struct btrace_function *prev,
+		     struct btrace_function *next)
+{
+  DEBUG_FTRACE ("connecting...");
+  ftrace_debug (prev, "..prev");
+  ftrace_debug (next, "..next");
+
+  /* The function segments are not yet connected.  */
+  gdb_assert (prev->segment.next == NULL);
+  gdb_assert (next->segment.prev == NULL);
+
+  prev->segment.next = next;
+  next->segment.prev = prev;
+
+  /* We may have moved NEXT to a different function level.  */
+  ftrace_fixup_level (next, prev->level - next->level);
+
+  /* If we run out of back trace for one, let's use the other's.  */
+  if (prev->up == NULL)
+    {
+      if (next->up != NULL)
+	{
+	  DEBUG_FTRACE ("using next's callers");
+	  ftrace_fixup_caller (prev, next->up, next->flags);
+	}
+    }
+  else if (next->up == NULL)
+    {
+      if (prev->up != NULL)
+	{
+	  DEBUG_FTRACE ("using prev's callers");
+	  ftrace_fixup_caller (next, prev->up, prev->flags);
+	}
+    }
+  else
+    {
+      /* PREV may have a tailcall caller, NEXT can't.  If it does, fixup the up
+	 link to add the tail callers to NEXT's back trace.
+
+	 This removes NEXT->UP from NEXT's back trace.  It will be added back
+	 when connecting NEXT and PREV's callers - provided they exist.
+
+	 If PREV's back trace consists of a series of tail calls without an
+	 actual call, there will be no further connection and NEXT's caller will
+	 be removed for good.  To catch this case, we handle it here and connect
+	 the top of PREV's back trace to NEXT's caller.  */
+      if ((prev->flags & BFUN_UP_LINKS_TO_TAILCALL) != 0)
+	{
+	  struct btrace_function *caller;
+	  btrace_function_flags flags;
+
+	  /* We checked NEXT->UP above so CALLER can't be NULL.  */
+	  caller = next->up;
+	  flags = next->flags;
+
+	  DEBUG_FTRACE ("adding prev's tail calls to next");
+
+	  ftrace_fixup_caller (next, prev->up, prev->flags);
+
+	  for (prev = prev->up; prev != NULL; prev = prev->up)
+	    {
+	      /* At the end of PREV's back trace, continue with CALLER.  */
+	      if (prev->up == NULL)
+		{
+		  DEBUG_FTRACE ("fixing up link for tailcall chain");
+		  ftrace_debug (prev, "..top");
+		  ftrace_debug (caller, "..up");
+
+		  ftrace_fixup_caller (prev, caller, flags);
+
+		  /* If we skipped any tail calls, this may move CALLER to a
+		     different function level.
+
+		     Note that changing CALLER's level is only OK because we
+		     know that this is the last iteration of the bottom-to-top
+		     walk in ftrace_connect_backtrace.
+
+		     Otherwise we will fix up CALLER's level when we connect it
+		     to PREV's caller in the next iteration.  */
+		  ftrace_fixup_level (caller, prev->level - caller->level - 1);
+		  break;
+		}
+
+	      /* There's nothing to do if we find a real call.  */
+	      if ((prev->flags & BFUN_UP_LINKS_TO_TAILCALL) == 0)
+		{
+		  DEBUG_FTRACE ("will fix up link in next iteration");
+		  break;
+		}
+	    }
+	}
+    }
+}
+
+/* Connect function segments on the same level in the back trace at LHS and RHS.
+   The back traces at LHS and RHS are expected to match according to
+   ftrace_match_backtrace.  */
+
+static void
+ftrace_connect_backtrace (struct btrace_function *lhs,
+			  struct btrace_function *rhs)
+{
+  while (lhs != NULL && rhs != NULL)
+    {
+      struct btrace_function *prev, *next;
+
+      gdb_assert (!ftrace_function_switched (lhs, rhs->msym, rhs->sym));
+
+      /* Connecting LHS and RHS may change the up link.  */
+      prev = lhs;
+      next = rhs;
+
+      lhs = ftrace_get_caller (lhs);
+      rhs = ftrace_get_caller (rhs);
+
+      ftrace_connect_bfun (prev, next);
+    }
+}
+
+/* Bridge the gap between two function segments left and right of a gap if their
+   respective back traces match in at least MIN_MATCHES functions.
+
+   Returns non-zero if the gap could be bridged, zero otherwise.  */
+
+static int
+ftrace_bridge_gap (struct btrace_function *lhs, struct btrace_function *rhs,
+		   int min_matches)
+{
+  struct btrace_function *best_l, *best_r, *cand_l, *cand_r;
+  int best_matches;
+
+  DEBUG_FTRACE ("checking gap at insn %u (req matches: %d)",
+		rhs->insn_offset - 1, min_matches);
+
+  best_matches = 0;
+  best_l = NULL;
+  best_r = NULL;
+
+  /* We search the back traces of LHS and RHS for valid connections and connect
+     the two function segments that give the longest combined back trace.  */
+
+  for (cand_l = lhs; cand_l != NULL; cand_l = ftrace_get_caller (cand_l))
+    for (cand_r = rhs; cand_r != NULL; cand_r = ftrace_get_caller (cand_r))
+      {
+	int matches;
+
+	matches = ftrace_match_backtrace (cand_l, cand_r);
+	if (best_matches < matches)
+	  {
+	    best_matches = matches;
+	    best_l = cand_l;
+	    best_r = cand_r;
+	  }
+      }
+
+  /* We need at least MIN_MATCHES matches.  */
+  gdb_assert (min_matches > 0);
+  if (best_matches < min_matches)
+    return 0;
+
+  DEBUG_FTRACE ("..matches: %d", best_matches);
+
+  /* We will fix up the level of BEST_R and succeeding function segments such
+     that BEST_R's level matches BEST_L's when we connect BEST_L to BEST_R.
+
+     This will ignore the level of RHS and following if BEST_R != RHS.  I.e. if
+     BEST_R is a successor of RHS in the back trace of RHS (phases 1 and 3).
+
+     To catch this, we already fix up the level here where we can start at RHS
+     instead of at BEST_R.  We will ignore the level fixup when connecting
+     BEST_L to BEST_R as they will already be on the same level.  */
+  ftrace_fixup_level (rhs, best_l->level - best_r->level);
+
+  ftrace_connect_backtrace (best_l, best_r);
+
+  return best_matches;
+}
+
+/* Try to bridge gaps due to overflow or decode errors by connecting the
+   function segments that are separated by the gap.  */
+
+static void
+btrace_bridge_gaps (struct thread_info *tp, VEC (bfun_s) **gaps)
+{
+  VEC (bfun_s) *remaining;
+  struct cleanup *old_chain;
+  int min_matches;
+
+  DEBUG ("bridge gaps");
+
+  remaining = NULL;
+  old_chain = make_cleanup (VEC_cleanup (bfun_s), &remaining);
+
+  /* We require a minimum amount of matches for bridging a gap.  The number of
+     required matches will be lowered with each iteration.
+
+     The more matches the higher our confidence that the bridging is correct.
+     For big gaps or small traces, however, it may not be feasible to require a
+     high number of matches.  */
+  for (min_matches = 5; min_matches > 0; --min_matches)
+    {
+      /* Let's try to bridge as many gaps as we can.  In some cases, we need to
+	 skip a gap and revisit it again after we closed later gaps.  */
+      while (!VEC_empty (bfun_s, *gaps))
+	{
+	  struct btrace_function *gap;
+	  unsigned int idx;
+
+	  for (idx = 0; VEC_iterate (bfun_s, *gaps, idx, gap); ++idx)
+	    {
+	      struct btrace_function *lhs, *rhs;
+	      int bridged;
+
+	      /* We may have a sequence of gaps if we run from one error into
+		 the next as we try to re-sync onto the trace stream.  Ignore
+		 all but the leftmost gap in such a sequence.
+
+		 Also ignore gaps at the beginning of the trace.  */
+	      lhs = gap->flow.prev;
+	      if (lhs == NULL || lhs->errcode != 0)
+		continue;
+
+	      /* Skip gaps to the right.  */
+	      for (rhs = gap->flow.next; rhs != NULL; rhs = rhs->flow.next)
+		if (rhs->errcode == 0)
+		  break;
+
+	      /* Ignore gaps at the end of the trace.  */
+	      if (rhs == NULL)
+		continue;
+
+	      bridged = ftrace_bridge_gap (lhs, rhs, min_matches);
+
+	      /* Keep track of gaps we were not able to bridge and try again.
+		 If we just pushed them to the end of GAPS we would risk an
+		 infinite loop in case we simply cannot bridge a gap.  */
+	      if (bridged == 0)
+		VEC_safe_push (bfun_s, remaining, gap);
+	    }
+
+	  /* Let's see if we made any progress.  */
+	  if (VEC_length (bfun_s, remaining) == VEC_length (bfun_s, *gaps))
+	    break;
+
+	  VEC_free (bfun_s, *gaps);
+
+	  *gaps = remaining;
+	  remaining = NULL;
+	}
+
+      /* We get here if either GAPS is empty or if GAPS equals REMAINING.  */
+      if (VEC_empty (bfun_s, *gaps))
+	break;
+
+      VEC_free (bfun_s, remaining);
+    }
+
+  do_cleanups (old_chain);
+
+  /* We may omit this in some cases.  Not sure it is worth the extra
+     complication, though.  */
+  ftrace_compute_global_level_offset (&tp->btrace);
+}
+
 /* Compute the function branch trace from BTS trace.  */
 
 static void
 btrace_compute_ftrace_bts (struct thread_info *tp,
-			   const struct btrace_data_bts *btrace)
+			   const struct btrace_data_bts *btrace,
+			   VEC (bfun_s) **gaps)
 {
   struct btrace_thread_info *btinfo;
   struct btrace_function *begin, *end;
   struct gdbarch *gdbarch;
-  unsigned int blk, ngaps;
+  unsigned int blk;
   int level;
 
   gdbarch = target_gdbarch ();
   btinfo = &tp->btrace;
   begin = btinfo->begin;
   end = btinfo->end;
-  ngaps = btinfo->ngaps;
   level = begin != NULL ? -btinfo->level : INT_MAX;
   blk = VEC_length (btrace_block_s, btrace->blocks);
 
@@ -641,7 +994,7 @@ btrace_compute_ftrace_bts (struct thread_info *tp,
 	      if (begin == NULL)
 		begin = end;
 
-	      ngaps += 1;
+	      VEC_safe_push (bfun_s, *gaps, end);
 
 	      warning (_("Recorded trace may be corrupted at instruction "
 			 "%u (pc = %s)."), end->insn_offset - 1,
@@ -686,7 +1039,8 @@ btrace_compute_ftrace_bts (struct thread_info *tp,
 	      /* Indicate the gap in the trace.  We just added INSN so we're
 		 not at the beginning.  */
 	      end = ftrace_new_gap (end, BDE_BTS_INSN_SIZE);
-	      ngaps += 1;
+
+	      VEC_safe_push (bfun_s, *gaps, end);
 
 	      warning (_("Recorded trace may be incomplete at instruction %u "
 			 "(pc = %s)."), end->insn_offset - 1,
@@ -710,7 +1064,6 @@ btrace_compute_ftrace_bts (struct thread_info *tp,
 
   btinfo->begin = begin;
   btinfo->end = end;
-  btinfo->ngaps = ngaps;
 
   /* LEVEL is the minimal function level of all btrace function segments.
      Define the global level offset to -LEVEL so all function levels are
@@ -758,7 +1111,7 @@ static void
 ftrace_add_pt (struct pt_insn_decoder *decoder,
 	       struct btrace_function **pbegin,
 	       struct btrace_function **pend, int *plevel,
-	       unsigned int *ngaps)
+	       VEC (bfun_s) **gaps)
 {
   struct btrace_function *begin, *end, *upd;
   uint64_t offset;
@@ -798,7 +1151,8 @@ ftrace_add_pt (struct pt_insn_decoder *decoder,
 	      if (insn.enabled)
 		{
 		  *pend = end = ftrace_new_gap (end, BDE_PT_DISABLED);
-		  *ngaps += 1;
+
+		  VEC_safe_push (bfun_s, *gaps, end);
 
 		  pt_insn_get_offset (decoder, &offset);
 
@@ -815,7 +1169,7 @@ ftrace_add_pt (struct pt_insn_decoder *decoder,
 	      if (begin == NULL)
 		*pbegin = begin = end;
 
-	      *ngaps += 1;
+	      VEC_safe_push (bfun_s, *gaps, end);
 
 	      pt_insn_get_offset (decoder, &offset);
 
@@ -851,7 +1205,8 @@ ftrace_add_pt (struct pt_insn_decoder *decoder,
       *pend = end = ftrace_new_gap (end, errcode);
       if (begin == NULL)
 	*pbegin = begin = end;
-      *ngaps += 1;
+
+      VEC_safe_push (bfun_s, *gaps, end);
 
       pt_insn_get_offset (decoder, &offset);
 
@@ -926,7 +1281,8 @@ static void btrace_finalize_ftrace_pt (struct pt_insn_decoder *decoder,
 
 static void
 btrace_compute_ftrace_pt (struct thread_info *tp,
-			  const struct btrace_data_pt *btrace)
+			  const struct btrace_data_pt *btrace,
+			  VEC (bfun_s) **gaps)
 {
   struct btrace_thread_info *btinfo;
   struct pt_insn_decoder *decoder;
@@ -970,8 +1326,7 @@ btrace_compute_ftrace_pt (struct thread_info *tp,
 	error (_("Failed to configure the Intel Processor Trace decoder: "
 		 "%s."), pt_errstr (pt_errcode (errcode)));
 
-      ftrace_add_pt (decoder, &btinfo->begin, &btinfo->end, &level,
-		     &btinfo->ngaps);
+      ftrace_add_pt (decoder, &btinfo->begin, &btinfo->end, &level, gaps);
     }
   CATCH (error, RETURN_MASK_ALL)
     {
@@ -979,7 +1334,8 @@ btrace_compute_ftrace_pt (struct thread_info *tp,
       if (error.reason == RETURN_QUIT && btinfo->end != NULL)
 	{
 	  btinfo->end = ftrace_new_gap (btinfo->end, BDE_PT_USER_QUIT);
-	  btinfo->ngaps++;
+
+	  VEC_safe_push (bfun_s, *gaps, btinfo->end);
 	}
 
       btrace_finalize_ftrace_pt (decoder, tp, level);
@@ -995,7 +1351,8 @@ btrace_compute_ftrace_pt (struct thread_info *tp,
 
 static void
 btrace_compute_ftrace_pt (struct thread_info *tp,
-			  const struct btrace_data_pt *btrace)
+			  const struct btrace_data_pt *btrace,
+			  VEC (bfun_s) **gaps)
 {
   internal_error (__FILE__, __LINE__, _("Unexpected branch trace format."));
 }
@@ -1006,7 +1363,8 @@ btrace_compute_ftrace_pt (struct thread_info *tp,
    a thread given by BTINFO.  */
 
 static void
-btrace_compute_ftrace (struct thread_info *tp, struct btrace_data *btrace)
+btrace_compute_ftrace_1 (struct thread_info *tp, struct btrace_data *btrace,
+			 VEC (bfun_s) **gaps)
 {
   DEBUG ("compute ftrace");
 
@@ -1016,17 +1374,53 @@ btrace_compute_ftrace (struct thread_info *tp, struct btrace_data *btrace)
       return;
 
     case BTRACE_FORMAT_BTS:
-      btrace_compute_ftrace_bts (tp, &btrace->variant.bts);
+      btrace_compute_ftrace_bts (tp, &btrace->variant.bts, gaps);
       return;
 
     case BTRACE_FORMAT_PT:
-      btrace_compute_ftrace_pt (tp, &btrace->variant.pt);
+      btrace_compute_ftrace_pt (tp, &btrace->variant.pt, gaps);
       return;
     }
 
   internal_error (__FILE__, __LINE__, _("Unkown branch trace format."));
 }
 
+static void
+btrace_finalize_ftrace (struct thread_info *tp, VEC (bfun_s) **gaps)
+{
+  if (!VEC_empty (bfun_s, *gaps))
+    {
+      tp->btrace.ngaps += VEC_length (bfun_s, *gaps);
+      btrace_bridge_gaps (tp, gaps);
+    }
+}
+
+static void
+btrace_compute_ftrace (struct thread_info *tp, struct btrace_data *btrace)
+{
+  VEC (bfun_s) *gaps;
+  struct cleanup *old_chain;
+
+  gaps = NULL;
+  old_chain = make_cleanup (VEC_cleanup (bfun_s), &gaps);
+
+  TRY
+    {
+      btrace_compute_ftrace_1 (tp, btrace, &gaps);
+    }
+  CATCH (error, RETURN_MASK_ALL)
+    {
+      btrace_finalize_ftrace (tp, &gaps);
+
+      throw_exception (error);
+    }
+  END_CATCH
+
+  btrace_finalize_ftrace (tp, &gaps);
+
+  do_cleanups (old_chain);
+}
+
 /* Add an entry for the current PC.  */
 
 static void
-- 
1.8.3.1


* [PATCH 3/5] btrace: update tail call heuristic
  2016-07-22  8:12 [PATCH 0/5] improve trace gap handling Markus Metzger
  2016-07-22  8:12 ` [PATCH 5/5] btrace: bridge gaps Markus Metzger
  2016-07-22  8:12 ` [PATCH 1/5] btrace: fix gap indication Markus Metzger
@ 2016-07-22  8:12 ` Markus Metzger
  2016-07-22  8:12 ` [PATCH 2/5] btrace: allow leading trace gaps Markus Metzger
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Markus Metzger @ 2016-07-22  8:12 UTC (permalink / raw)
  To: gdb-patches; +Cc: palves

An unconditional jump to the start of a function typically indicates a tail
call.

If we can't determine the start of the function at the destination address, we
used to treat the jump as a tail call, as well.  This resulted in lots of tail
calls for code for which we don't have symbol information.

Restrict the heuristic: when we can't determine the start of a function, only
consider jumps that switch functions as tail calls.  This effectively disables
tail call detection for code without symbol information.
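
After this patch, the jump handling in ftrace_update_function reads like this
(assembled from the hunk below):

    start = get_pc_function_start (pc);

    /* A jump to the start of a function is (typically) a tail call.  */
    if (start == pc)
      return ftrace_new_tailcall (bfun, mfun, fun);

    /* If we can't determine the function for PC, we treat a jump at
       the end of the block as tail call if we're switching functions
       and as an intra-function branch if we're not.  */
    if (start == 0 && ftrace_function_switched (bfun, mfun, fun))
      return ftrace_new_tailcall (bfun, mfun, fun);

    /* Otherwise fall through to the normal handling.  */
    break;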

2016-07-22  Markus Metzger  <markus.t.metzger@intel.com>

gdb/
	* btrace.c (ftrace_update_function): Update tail call heuristic.
---
 gdb/btrace.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index e0d0f27..e817f09 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -528,10 +528,17 @@ ftrace_update_function (struct btrace_function *bfun, CORE_ADDR pc)
 
 	    start = get_pc_function_start (pc);
 
+	    /* A jump to the start of a function is (typically) a tail call.  */
+	    if (start == pc)
+	      return ftrace_new_tailcall (bfun, mfun, fun);
+
 	    /* If we can't determine the function for PC, we treat a jump at
-	       the end of the block as tail call.  */
-	    if (start == 0 || start == pc)
+	       the end of the block as tail call if we're switching functions
+	       and as an intra-function branch if we're not.  */
+	    if (start == 0 && ftrace_function_switched (bfun, mfun, fun))
 	      return ftrace_new_tailcall (bfun, mfun, fun);
+
+	    break;
 	  }
 	}
     }
-- 
1.8.3.1


* [PATCH 4/5] btrace: preserve function level for unexpected returns
  2016-07-22  8:12 [PATCH 0/5] improve trace gap handling Markus Metzger
                   ` (3 preceding siblings ...)
  2016-07-22  8:12 ` [PATCH 2/5] btrace: allow leading trace gaps Markus Metzger
@ 2016-07-22  8:12 ` Markus Metzger
  2016-10-27 10:59 ` [PATCH 0/5] improve trace gap handling Yao Qi
  5 siblings, 0 replies; 11+ messages in thread
From: Markus Metzger @ 2016-07-22  8:12 UTC (permalink / raw)
  To: gdb-patches; +Cc: palves

When encountering a return for which we have not seen a corresponding call, GDB
starts a new back trace from level -1, i.e. from the level of the first function
in the trace.

In the presence of trace gaps, this may cause a rather big jump.

    (gdb) record function-call-history /c 192, +8
    192	                                          sbrk
    193	                                            brk
    194	                                              __x86.get_pc_thunk.bx
    195	                                            brk
    196	                                              __kernel_vsyscall
    197	[disabled]
    198	                                              __kernel_vsyscall
    199	        brk
    200	      sbrk

This doesn't make things any clearer.  Let's remain on the same level
instead.

    (gdb) record function-call-history /c 192, +8
    192	      sbrk
    193	        brk
    194	          __x86.get_pc_thunk.bx
    195	        brk
    196	          __kernel_vsyscall
    197	[disabled]
    198	          __kernel_vsyscall
    199	        brk
    200	      sbrk

In this case it will look like we were able to connect the trace parts across
the disabled gap.  We were not.  More work is required to achieve this.

In the general case, the function-call history for the two trace parts won't
match.  They may be off by a few levels or they may be entirely different.  All
this patch does is to preserve the indentation level of the record
function-call-history command.

The disabled gap is caused by a sysenter not returning to the next instruction.

    (gdb) record function-call-history /i 196, +1
    196     __kernel_vsyscall       inst 66515,66519
    (gdb) record instruction-history 66515
    66515      0xb7fdcbf8 <__kernel_vsyscall+0>:    push   %ecx
    66516      0xb7fdcbf9 <__kernel_vsyscall+1>:    push   %edx
    66517      0xb7fdcbfa <__kernel_vsyscall+2>:    push   %ebp
    66518      0xb7fdcbfb <__kernel_vsyscall+3>:    mov    %esp,%ebp
    66519      0xb7fdcbfd <__kernel_vsyscall+5>:    sysenter
    [disabled]
    66520      0xb7fdcc08 <__kernel_vsyscall+16>:   pop    %ebp
    66521      0xb7fdcc09 <__kernel_vsyscall+17>:   pop    %edx
    66522      0xb7fdcc0a <__kernel_vsyscall+18>:   pop    %ecx
    66523      0xb7fdcc0b <__kernel_vsyscall+19>:   ret
    66524      0xb7e8e09e <brk+30>: xchg   %ecx,%ebx
    (gdb) disassemble 0xb7fdcbf8, 0xb7fdcc0c
    Dump of assembler code from 0xb7fdcbf8 to 0xb7fdcc0c:
       0xb7fdcbf8 <__kernel_vsyscall+0>:    push   %ecx
       0xb7fdcbf9 <__kernel_vsyscall+1>:    push   %edx
       0xb7fdcbfa <__kernel_vsyscall+2>:    push   %ebp
       0xb7fdcbfb <__kernel_vsyscall+3>:    mov    %esp,%ebp
       0xb7fdcbfd <__kernel_vsyscall+5>:    sysenter
       0xb7fdcbff <__kernel_vsyscall+7>:    nop
       0xb7fdcc00 <__kernel_vsyscall+8>:    nop
       0xb7fdcc01 <__kernel_vsyscall+9>:    nop
       0xb7fdcc02 <__kernel_vsyscall+10>:   nop
       0xb7fdcc03 <__kernel_vsyscall+11>:   nop
       0xb7fdcc04 <__kernel_vsyscall+12>:   nop
       0xb7fdcc05 <__kernel_vsyscall+13>:   nop
       0xb7fdcc06 <__kernel_vsyscall+14>:   int    $0x80
       0xb7fdcc08 <__kernel_vsyscall+16>:   pop    %ebp
       0xb7fdcc09 <__kernel_vsyscall+17>:   pop    %edx
       0xb7fdcc0a <__kernel_vsyscall+18>:   pop    %ecx
       0xb7fdcc0b <__kernel_vsyscall+19>:   ret
    End of assembler dump.

I've seen this on 32-bit Fedora 23.  I have not investigated what causes this
or whether we can avoid the gap in the first place.  Let's first try to make
GDB handle such gaps more gracefully.

2016-07-22  Markus Metzger  <markus.t.metzger@intel.com>

gdb/
	* btrace.c (ftrace_new_return): Start from the previous function's level
	if we can't find a matching call for a return.
---
 gdb/btrace.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index e817f09..32036cf 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -386,16 +386,12 @@ ftrace_new_return (struct btrace_function *prev,
 	  /* There is no call in PREV's back trace.  We assume that the
 	     branch trace did not include it.  */
 
-	  /* Let's find the topmost call function - this skips tail calls.  */
+	  /* Let's find the topmost function and add a new caller for it.
+	     This should handle a series of initial tail calls.  */
 	  while (prev->up != NULL)
 	    prev = prev->up;
 
-	  /* We maintain levels for a series of returns for which we have
-	     not seen the calls.
-	     We start at the preceding function's level in case this has
-	     already been a return for which we have not seen the call.
-	     We start at level 0 otherwise, to handle tail calls correctly.  */
-	  bfun->level = min (0, prev->level) - 1;
+	  bfun->level = prev->level - 1;
 
 	  /* Fix up the call stack for PREV.  */
 	  ftrace_fixup_caller (prev, bfun, BFUN_UP_LINKS_TO_RET);
@@ -405,8 +401,16 @@ ftrace_new_return (struct btrace_function *prev,
       else
 	{
 	  /* There is a call in PREV's back trace to which we should have
-	     returned.  Let's remain at this level.  */
-	  bfun->level = prev->level;
+	     returned but didn't.  Let's start a new, separate back trace
+	     from PREV's level.  */
+	  bfun->level = prev->level - 1;
+
+	  /* We fix up the back trace for PREV but leave other function segments
+	     on the same level as they are.
+	     This should handle things like schedule () correctly where we're
+	     switching contexts.  */
+	  prev->up = bfun;
+	  prev->flags = BFUN_UP_LINKS_TO_RET;
 
 	  ftrace_debug (bfun, "new return - unknown caller");
 	}
-- 
1.8.3.1


* Re: [PATCH 0/5] improve trace gap handling
  2016-07-22  8:12 [PATCH 0/5] improve trace gap handling Markus Metzger
                   ` (4 preceding siblings ...)
  2016-07-22  8:12 ` [PATCH 4/5] btrace: preserve function level for unexpected returns Markus Metzger
@ 2016-10-27 10:59 ` Yao Qi
  2016-10-27 12:39   ` Metzger, Markus T
  5 siblings, 1 reply; 11+ messages in thread
From: Yao Qi @ 2016-10-27 10:59 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches, palves

Markus Metzger <markus.t.metzger@intel.com> writes:

> A small patch series that improves dealing with gaps in recorded trace.

I think you can self-approve these btrace patches, unless this patch
series addresses Pedro's or other people's review comments.  The cover
letter is too brief to link this patch series to previous reviews.

-- 
Yao (齐尧)


* RE: [PATCH 0/5] improve trace gap handling
  2016-10-27 10:59 ` [PATCH 0/5] improve trace gap handling Yao Qi
@ 2016-10-27 12:39   ` Metzger, Markus T
  2016-10-27 15:04     ` Yao Qi
  0 siblings, 1 reply; 11+ messages in thread
From: Metzger, Markus T @ 2016-10-27 12:39 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches, palves

> -----Original Message-----
> From: Yao Qi [mailto:qiyaoltc@gmail.com]
> Sent: Thursday, October 27, 2016 12:59 PM
> To: Metzger, Markus T <markus.t.metzger@intel.com>
> Cc: gdb-patches@sourceware.org; palves@redhat.com
> Subject: Re: [PATCH 0/5] improve trace gap handling

Hello Yao,

> > A small patch series that improves dealing with gaps in recorded trace.
> 
> I think you can self-approve these btrace patches, unless this patch
> series addresses Pedro's or other people's review comments.  The cover
> letter is too brief to link this patch series to previous reviews.

The series has not been reviewed yet.  It does not refer to any previous
review comment.

Don't we want patches to be peer reviewed in general?  Or are you
saying that I can and should make changes to record-btrace without
review?

thanks,
markus.



* Re: [PATCH 0/5] improve trace gap handling
  2016-10-27 12:39   ` Metzger, Markus T
@ 2016-10-27 15:04     ` Yao Qi
  2016-10-27 15:11       ` Pedro Alves
  0 siblings, 1 reply; 11+ messages in thread
From: Yao Qi @ 2016-10-27 15:04 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, palves

On Thu, Oct 27, 2016 at 1:39 PM, Metzger, Markus T
<markus.t.metzger@intel.com> wrote:
>
> Don't we want patches to be peer reviewed in general?  Or are you
> saying that I can and should make changes to record-btrace without
> review?

No, I am not saying that... :-)  Peer review is always welcome.  As we
said in MAINTAINERS:

"All maintainers are encouraged to post major patches to the gdb-patches
mailing list for comments, even if they have the authority to commit the
patch without review from another maintainer."

You, as a "responsible maintainer" for btrace, can/should review all
patches in the area of btrace, including patches written by yourself.

I think all these rules serve the purpose of keeping a healthy code base
in an efficient way.  It helps nobody to block patches for three
months for lack of peer review.

You must post your patches for review, and you have the authority
to approve the btrace bits.  You can leave your patches on the mailing
list for a period of time, one week for example, to collect comments
and objections.

-- 
Yao (齐尧)


* Re: [PATCH 0/5] improve trace gap handling
  2016-10-27 15:04     ` Yao Qi
@ 2016-10-27 15:11       ` Pedro Alves
  2016-10-28  7:11         ` Metzger, Markus T
  0 siblings, 1 reply; 11+ messages in thread
From: Pedro Alves @ 2016-10-27 15:11 UTC (permalink / raw)
  To: Yao Qi, Metzger, Markus T; +Cc: gdb-patches

On 10/27/2016 04:03 PM, Yao Qi wrote:
> On Thu, Oct 27, 2016 at 1:39 PM, Metzger, Markus T
> <markus.t.metzger@intel.com> wrote:
>>
>> Don't we want patches to be peer reviewed in general?  Or are you
>> saying that I can and should make changes to record-btrace without
>> review?
> 
> No, I am not saying that... :-)  Peer review is always welcome.  As we
> said in MAINTAINERS:
> 
> "All maintainers are encouraged to post major patches to the gdb-patches
> mailing list for comments, even if they have the authority to commit the
> patch without review from another maintainer."
> 
> You, as a "responsible maintainer" for btrace, can/should review all
> patches in the area of btrace, including patches written by yourself.
> 
> I think all these rules serve the purpose of keeping a healthy code base
> in an efficient way.  It helps nobody to block patches for three
> months for lack of peer review.
> 
> You must post your patches for review, and you have the authority
> to approve the btrace bits.  You can leave your patches on the mailing
> list for a period of time, one week for example, to collect comments
> and objections.
> 

I definitely agree.  It's because we trust you and think you're
competent that we made you btrace maintainer.  :-)

FWIW, I've quickly skimmed the patches now looking for something
that I might even have input on, and I found nothing.  Regarding
style and following GDB practices, I think your patches are
consistently perfect.

Thanks,
Pedro Alves


* RE: [PATCH 0/5] improve trace gap handling
  2016-10-27 15:11       ` Pedro Alves
@ 2016-10-28  7:11         ` Metzger, Markus T
  0 siblings, 0 replies; 11+ messages in thread
From: Metzger, Markus T @ 2016-10-28  7:11 UTC (permalink / raw)
  To: Pedro Alves, Yao Qi; +Cc: gdb-patches

> -----Original Message-----
> From: Pedro Alves [mailto:palves@redhat.com]
> Sent: Thursday, October 27, 2016 5:11 PM
> To: Yao Qi <qiyaoltc@gmail.com>; Metzger, Markus T
> <markus.t.metzger@intel.com>
> Cc: gdb-patches@sourceware.org
> Subject: Re: [PATCH 0/5] improve trace gap handling
> 
> On 10/27/2016 04:03 PM, Yao Qi wrote:
> > On Thu, Oct 27, 2016 at 1:39 PM, Metzger, Markus T
> > <markus.t.metzger@intel.com> wrote:
> >>
> >> Don't we want patches to be peer reviewed in general?  Or are you
> >> saying that I can and should make changes to record-btrace without
> >> review?
> >
> > No, I am not saying that... :-)  Peer review is always welcome.  As we
> > said in MAINTAINERS:
> >
> > "All maintainers are encouraged to post major patches to the gdb-patches
> > mailing list for comments, even if they have the authority to commit the
> > patch without review from another maintainer."
> >
> > You, as a "responsible maintainer" for btrace, can/should review all
> > patches in the area of btrace, including patches written by yourself.
> >
> > I think all these rules serve the purpose of keeping a healthy code base
> > in an efficient way.  It helps nobody to block patches for three
> > months for lack of peer review.
> >
> > You must post your patches for review, and you have the authority
> > to approve the btrace bits.  You can leave your patches on the mailing
> > list for a period of time, one week for example, to collect comments
> > and objections.
> >
> 
> I definitely agree.  It's because we trust you and think you're
> competent that we made you btrace maintainer.  :-)
> 
> FWIW, I've quickly skimmed the patches now looking for something
> that I might even have input on, and I found nothing.  Regarding
> style and following GDB practices, I think your patches are
> consistently perfect.

OK, thanks.  Then I'm going to self-approve this patch series.  Thanks,
Pedro, for skimming over it.  I have another small fix that I'm going to
post and leave on the list for a week or two - it's almost obvious.

The other patch series I posted some time ago contains changes
to infrun, record (full), and the MI test suite.  It generally impacts the
behavior of MI.  I'm going to ping this one again.

Regards,
Markus.


