public inbox for gdb-patches@sourceware.org
* [patch v4 24/24] record-btrace: skip tail calls in back trace
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

The branch trace represents the caller/callee relationship of tail calls.  The
caller of a tail call is shown in the back trace and in the function-call
history.

This is inconsistent with GDB's normal behavior, where the tail caller is not
shown in the back trace.  It also causes the finish command to fail for tail
calls.

This patch skips tail calls when computing the back trace during replay.  The
finish command now also works for tail calls.

The tail caller is still shown in the function-call history.

I'm not sure which behavior is better.  I liked seeing the tail caller in the
call stack, and I don't use the finish command very often.  On the other hand,
reverse/replay should be as close to live debugging as possible.
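
For illustration, the situation looks roughly like the program used by the
x86-tailcall test.  The sketch below is illustrative only; see
gdb.btrace/x86-tailcall.c for the actual source and line numbers:

    static int
    bar (void)
    {
      return 42;
    }

    static int
    foo (void)
    {
      /* With optimization, this call becomes a jump (tail call), so no
         frame for foo remains on the stack.  */
      return bar ();
    }

    int
    main (void)
    {
      int answer;

      answer = foo ();
      return answer;
    }

With this patch, a back trace taken while replaying inside bar shows main as
frame #1, matching live debugging, and finish in bar returns to main.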

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* record-btrace.c (record_btrace_frame_sniffer): Skip tail calls.

testsuite/
	* gdb.btrace/tailcall.exp: Update.  Add stepping tests.
	* gdb.btrace/rn-dl-bind.c: New.
	* gdb.btrace/rn-dl-bind.exp: New.


---
 gdb/record-btrace.c                     |   15 ++++++----
 gdb/testsuite/gdb.btrace/rn-dl-bind.c   |   37 +++++++++++++++++++++++
 gdb/testsuite/gdb.btrace/rn-dl-bind.exp |   48 +++++++++++++++++++++++++++++++
 gdb/testsuite/gdb.btrace/tailcall.exp   |   25 +++++++++++++--
 4 files changed, 115 insertions(+), 10 deletions(-)
 create mode 100644 gdb/testsuite/gdb.btrace/rn-dl-bind.c
 create mode 100644 gdb/testsuite/gdb.btrace/rn-dl-bind.exp

diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index b45a5fb..9feda30 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -1026,7 +1026,7 @@ record_btrace_frame_this_id (struct frame_info *this_frame, void **this_cache,
   cache = *this_cache;
 
   stack = 0;
-  code = get_frame_func (this_frame);
+  code = cache->pc;
   special = (CORE_ADDR) cache->bfun;
 
   *this_id = frame_id_build_special (stack, code, special);
@@ -1120,6 +1120,13 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
   caller = bfun->up;
   pc = 0;
 
+  /* Skip tail calls.  */
+  while (caller != NULL && (bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) != 0)
+    {
+      bfun = caller;
+      caller = bfun->up;
+    }
+
   /* Determine where to find the PC in the upper function segment.  */
   if (caller != NULL)
     {
@@ -1133,11 +1140,7 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
 	  insn = VEC_last (btrace_insn_s, caller->insn);
 	  pc = insn->pc;
 
-	  /* We link directly to the jump instruction in the case of a tail
-	     call, since the next instruction will likely be outside of the
-	     caller function.  */
-	  if ((bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) == 0)
-	    pc += gdb_insn_length (get_frame_arch (this_frame), pc);
+	  pc += gdb_insn_length (get_frame_arch (this_frame), pc);
 	}
 
       DEBUG ("[frame] sniffed frame for %s on level %d",
diff --git a/gdb/testsuite/gdb.btrace/rn-dl-bind.c b/gdb/testsuite/gdb.btrace/rn-dl-bind.c
new file mode 100644
index 0000000..4930297
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/rn-dl-bind.c
@@ -0,0 +1,37 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2013 Free Software Foundation, Inc.
+
+   Contributed by Intel Corp. <markus.t.metzger@intel.com>
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <stdlib.h>
+
+int test (void)
+{
+  int ret;
+
+  ret = strtoul ("42", NULL, 10);	/* test.1 */
+  return ret;				/* test.2 */
+}					/* test.3 */
+
+int
+main (void)
+{
+  int ret;
+
+  ret = test ();			/* main.1 */
+  return ret;				/* main.2 */
+}					/* main.3 */
diff --git a/gdb/testsuite/gdb.btrace/rn-dl-bind.exp b/gdb/testsuite/gdb.btrace/rn-dl-bind.exp
new file mode 100644
index 0000000..4d803f9
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/rn-dl-bind.exp
@@ -0,0 +1,48 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile
+if [prepare_for_testing $testfile.exp $testfile $srcfile {c++ debug}] {
+    return -1
+}
+if ![runto_main] {
+    return -1
+}
+
+# trace the code for the call to test
+gdb_test_no_output "record btrace" "rn-dl-bind, 0.1"
+gdb_test "next" ".*main\.2.*" "rn-dl-bind, 0.2"
+
+# just dump the function-call-history to help debugging
+gdb_test_no_output "set record function-call-history-size 0" "rn-dl-bind, 0.3"
+gdb_test "record function-call-history /cli 1" ".*" "rn-dl-bind, 0.4"
+
+# check that we can reverse-next and next
+gdb_test "reverse-next" ".*main\.1.*" "rn-dl-bind, 1.1"
+gdb_test "next" ".*main\.2.*" "rn-dl-bind, 1.2"
+
+# now go into test and try to reverse-next and next over the library call
+gdb_test "reverse-step" ".*test\.3.*" "rn-dl-bind, 2.1"
+gdb_test "reverse-step" ".*test\.2.*" "rn-dl-bind, 2.2"
+gdb_test "reverse-next" ".*test\.1.*" "rn-dl-bind, 2.3"
+gdb_test "next" ".*test\.2.*" "rn-dl-bind, 2.4"
diff --git a/gdb/testsuite/gdb.btrace/tailcall.exp b/gdb/testsuite/gdb.btrace/tailcall.exp
index 5cadee0..df8d66a 100644
--- a/gdb/testsuite/gdb.btrace/tailcall.exp
+++ b/gdb/testsuite/gdb.btrace/tailcall.exp
@@ -57,12 +57,29 @@ gdb_test "record goto 4" "
 # check the backtrace
 gdb_test "backtrace" "
 #0.*bar.*at .*x86-tailcall.c:24.*\r
-#1.*foo.*at .*x86-tailcall.c:29.*\r
-#2.*main.*at .*x86-tailcall.c:37.*\r
+#1.*main.*at .*x86-tailcall.c:37.*\r
 Backtrace stopped: not enough registers or memory available to unwind further" "backtrace in bar"
 
 # walk the backtrace
 gdb_test "up" "
-.*foo \\(\\) at .*x86-tailcall.c:29.*" "up to foo"
-gdb_test "up" "
 .*main \\(\\) at .*x86-tailcall.c:37.*" "up to main"
+gdb_test "down" "
+#0.*bar.*at .*x86-tailcall.c:24.*" "down to bar"
+
+# test stepping into and out of tailcalls.
+gdb_test "finish" "
+.*main.*at .*x86-tailcall.c:37.*" "step, 1.1"
+gdb_test "reverse-step" "
+.*bar.*at .*x86-tailcall.c:24.*" "step, 1.2"
+gdb_test "reverse-finish" "
+.*foo \\(\\) at .*x86-tailcall.c:29.*" "step, 1.3"
+gdb_test "reverse-step" "
+.*main.*at .*x86-tailcall.c:37.*" "step, 1.4"
+gdb_test "next" "
+.*main.*at .*x86-tailcall.c:39.*" "step, 1.5"
+gdb_test "reverse-next" "
+.*main.*at .*x86-tailcall.c:37.*" "step, 1.6"
+gdb_test "step" "
+.*foo \\(\\) at .*x86-tailcall.c:29.*" "step, 1.7"
+gdb_test "finish" "
+.*main.*at .*x86-tailcall.c:37.*" "step, 1.8"
-- 
1.7.1

* [patch v4 16/24] record-btrace: provide target_find_new_threads method
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

The "info threads" command tries to read memory, which is not possible during
replay.  This results in an error message and aborts the command without showing
the existing threads.

Provide a to_find_new_threads target method to skip the search while replaying.

2013-07-03  Markus Metzger <markus.t.metzger@intel.com>

	* record-btrace.c (record_btrace_find_new_threads): New.
	(init_record_btrace_ops): Initialize to_find_new_threads.


---
 gdb/record-btrace.c |   19 +++++++++++++++++++
 1 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 430296a..2b552d5 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -1005,6 +1005,24 @@ record_btrace_wait (struct target_ops *ops, ptid_t ptid,
   error (_("You can't do this from here.  Do 'record goto end', first."));
 }
 
+/* The to_find_new_threads method of target record-btrace.  */
+
+static void
+record_btrace_find_new_threads (struct target_ops *ops)
+{
+  /* Don't expect new threads if we're replaying.  */
+  if (record_btrace_is_replaying ())
+    return;
+
+  /* Forward the request.  */
+  for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
+    if (ops->to_find_new_threads != NULL)
+      {
+	ops->to_find_new_threads (ops);
+	break;
+      }
+}
+
 /* Initialize the record-btrace target ops.  */
 
 static void
@@ -1039,6 +1057,7 @@ init_record_btrace_ops (void)
   ops->to_get_unwinder = &record_btrace_frame_unwind;
   ops->to_resume = record_btrace_resume;
   ops->to_wait = record_btrace_wait;
+  ops->to_find_new_threads = record_btrace_find_new_threads;
   ops->to_stratum = record_stratum;
   ops->to_magic = OPS_MAGIC;
 }
-- 
1.7.1

* [patch v4 19/24] btrace, linux: fix memory leak when reading branch trace
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

When it takes more than one iteration to read the BTS trace, the trace from the
previous iteration is leaked.  Fix it.
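
The bug is of a common shape: a retry loop that re-populates a heap-allocated
vector on every iteration without discarding what the previous iteration
collected.  A minimal sketch of that pattern follows; the helper and the stop
condition are hypothetical, only the VEC_truncate call mirrors the actual fix
in linux_read_btrace:

    /* Illustrative sketch; the real loop lives in linux_read_btrace.  */
    VEC (btrace_block_s) *btrace = NULL;
    int retries;

    for (retries = 5; retries > 0; retries--)
      {
        /* The fix: drop whatever the previous iteration collected before
           reading again, so those blocks do not leak.  */
        VEC_truncate (btrace_block_s, btrace, 0);

        /* Hypothetical helper standing in for the actual BTS read; it
           appends the blocks it finds to BTRACE.  */
        read_bts_blocks (&btrace);

        /* Hypothetical stop condition: the buffer did not move while we
           were reading.  */
        if (read_was_consistent ())
          break;
      }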

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* common/linux-btrace.c (linux_read_btrace): Free trace from
	previous iteration.


---
 gdb/common/linux-btrace.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/gdb/common/linux-btrace.c b/gdb/common/linux-btrace.c
index 4880f41..b30a6ec 100644
--- a/gdb/common/linux-btrace.c
+++ b/gdb/common/linux-btrace.c
@@ -522,6 +522,9 @@ linux_read_btrace (struct btrace_target_info *tinfo,
     {
       data_head = header->data_head;
 
+      /* Delete any leftover trace from the previous iteration.  */
+      VEC_truncate (btrace_block_s, btrace, 0);
+
       /* If there's new trace, let's read it.  */
       if (data_head != tinfo->data_head)
 	{
-- 
1.7.1

* [patch v4 03/24] btrace: change branch trace data structure
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches, Christian Himpel

The branch trace is represented as three vectors:
  - a block vector
  - an instruction vector
  - a function vector

Each vector (except for the first) is computed from the one above.

Change this into a graph where a node represents a sequence of instructions
belonging to the same function and where we have three types of edges to connect
the function segments:
  - control flow
  - same function (instance)
  - call stack

This allows us to navigate in the branch trace.  We will need this for "record
goto" and reverse execution.

This patch introduces the data structure and computes the control flow edges.
It also introduces iterator structs to simplify iterating over the branch trace
in control-flow order.
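
As an illustration (not part of the patch), consider a trace in which main
calls foo and foo then returns to main.  With the semantics introduced here,
this yields three function segments:

    /* Illustrative sketch of the resulting segment graph.

       #1 main  level 0   flow.next --> #2
       #2 foo   level 1   flow.next --> #3,  up --> #1        (call stack)
       #3 main  level 0   segment.prev --> #1                 (same function)
                          up --> #1->up, i.e. NULL

       Control flow order: #1 -> #2 -> #3.  The back trace of #3 equals
       that of #1, since the return restored the caller chain.  */

Levels can become negative when the trace starts inside a callee and we only
see returns; a per-thread level offset later normalizes the smallest level
back to zero.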

It also fixes PR gdb/15240, since recursive calls are now handled correctly.
Fix the test, which got both the number of expected fib instances and the
function numbers wrong.

The current instruction had been part of the branch trace.  This would look odd
once we add support for reverse execution, so remove it from the recorded
history.  We still keep it in the trace itself to allow extending the branch
trace more easily in the future.

CC: Christian Himpel <christian.himpel@intel.com>
2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* btrace.h (struct btrace_func_link): New.
	(enum btrace_function_flag): New.
	(struct btrace_inst): Rename to ...
	(struct btrace_insn): ...this. Update all users.
	(struct btrace_func) <ibegin, iend>: Remove.
	(struct btrace_func_link): New.
	(struct btrace_func): Rename to ...
	(struct btrace_function): ...this. Update all users.
	(struct btrace_function) <segment, flow, up, insn, insn_offset,
	number, level, flags>: New.
	(struct btrace_insn_iterator): Rename to ...
	(struct btrace_insn_history): ...this.
	Update all users.
	(struct btrace_insn_iterator, btrace_call_iterator): New.
	(struct btrace_target_info) <btrace, itrace, ftrace>: Remove.
	(struct btrace_target_info) <begin, end, level,
	insn_history, call_history>: New.
	(btrace_insn_get, btrace_insn_number, btrace_insn_begin,
	btrace_insn_end, btrace_insn_prev, btrace_insn_next,
	btrace_insn_cmp, btrace_find_insn_by_number, btrace_call_get,
	btrace_call_number, btrace_call_begin, btrace_call_end,
	btrace_call_prev, btrace_call_next, btrace_call_cmp,
	btrace_find_function_by_number, btrace_set_insn_history,
	btrace_set_call_history): New.
	* btrace.c (btrace_init_insn_iterator,
	btrace_init_func_iterator, compute_itrace): Remove.
	(ftrace_print_function_name, ftrace_print_filename,
	ftrace_skip_file): Change
	parameter to const.
	(ftrace_init_func): Remove.
	(ftrace_debug): Use new btrace_function fields.
	(ftrace_function_switched): Also consider gaining and
	losing symbol information.
	(ftrace_print_insn_addr, ftrace_new_call, ftrace_new_return,
	ftrace_new_switch, ftrace_find_caller, ftrace_new_function,
	ftrace_update_caller, ftrace_fixup_caller, ftrace_new_tailcall):
	New.
	(ftrace_new_function): Move. Remove debug print.
	(ftrace_update_lines, ftrace_update_insns): New.
	(ftrace_update_function): Check for call, ret, and jump.
	(compute_ftrace): Renamed to ...
	(btrace_compute_ftrace): ...this. Rewritten to compute call
	stack.
	(btrace_fetch, btrace_clear): Updated.
	(btrace_insn_get, btrace_insn_number, btrace_insn_begin,
	btrace_insn_end, btrace_insn_prev, btrace_insn_next,
	btrace_insn_cmp, btrace_find_insn_by_number, btrace_call_get,
	btrace_call_number, btrace_call_begin, btrace_call_end,
	btrace_call_prev, btrace_call_next, btrace_call_cmp,
	btrace_find_function_by_number, btrace_set_insn_history,
	btrace_set_call_history): New.
	* record-btrace.c (require_btrace): Use new btrace thread
	info fields.
	(record_btrace_info, btrace_insn_history,
	record_btrace_insn_history, record_btrace_insn_history_range):
	Use new btrace thread info fields and new iterator.
	(btrace_func_history_src_line): Rename to ...
	(btrace_call_history_src_line): ...this. Use new btrace
	thread info fields.
	(btrace_func_history): Rename to ...
	(btrace_call_history): ...this. Use new btrace thread info
	fields and new iterator.
	(record_btrace_call_history, record_btrace_call_history_range):
	Use new btrace thread info fields and new iterator.

testsuite/
	* gdb.btrace/function_call_history.exp: Fix expected function
	trace.


---
 gdb/btrace.c                                       | 1186 +++++++++++++++++---
 gdb/btrace.h                                       |  230 ++++-
 gdb/record-btrace.c                                |  342 +++---
 gdb/testsuite/gdb.btrace/function_call_history.exp |   28 +-
 gdb/testsuite/gdb.btrace/instruction_history.exp   |   12 +-
 5 files changed, 1405 insertions(+), 393 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index 3230a3e..53549db 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -45,92 +45,11 @@
 
 #define DEBUG_FTRACE(msg, args...) DEBUG ("[ftrace] " msg, ##args)
 
-/* Initialize the instruction iterator.  */
-
-static void
-btrace_init_insn_iterator (struct btrace_thread_info *btinfo)
-{
-  DEBUG ("init insn iterator");
-
-  btinfo->insn_iterator.begin = 1;
-  btinfo->insn_iterator.end = 0;
-}
-
-/* Initialize the function iterator.  */
-
-static void
-btrace_init_func_iterator (struct btrace_thread_info *btinfo)
-{
-  DEBUG ("init func iterator");
-
-  btinfo->func_iterator.begin = 1;
-  btinfo->func_iterator.end = 0;
-}
-
-/* Compute the instruction trace from the block trace.  */
-
-static VEC (btrace_inst_s) *
-compute_itrace (VEC (btrace_block_s) *btrace)
-{
-  VEC (btrace_inst_s) *itrace;
-  struct gdbarch *gdbarch;
-  unsigned int b;
-
-  DEBUG ("compute itrace");
-
-  itrace = NULL;
-  gdbarch = target_gdbarch ();
-  b = VEC_length (btrace_block_s, btrace);
-
-  while (b-- != 0)
-    {
-      btrace_block_s *block;
-      CORE_ADDR pc;
-
-      block = VEC_index (btrace_block_s, btrace, b);
-      pc = block->begin;
-
-      /* Add instructions for this block.  */
-      for (;;)
-	{
-	  btrace_inst_s *inst;
-	  int size;
-
-	  /* We should hit the end of the block.  Warn if we went too far.  */
-	  if (block->end < pc)
-	    {
-	      warning (_("Recorded trace may be corrupted."));
-	      break;
-	    }
-
-	  inst = VEC_safe_push (btrace_inst_s, itrace, NULL);
-	  inst->pc = pc;
-
-	  /* We're done once we pushed the instruction at the end.  */
-	  if (block->end == pc)
-	    break;
-
-	  size = gdb_insn_length (gdbarch, pc);
-
-	  /* Make sure we terminate if we fail to compute the size.  */
-	  if (size <= 0)
-	    {
-	      warning (_("Recorded trace may be incomplete."));
-	      break;
-	    }
-
-	  pc += size;
-	}
-    }
-
-  return itrace;
-}
-
 /* Return the function name of a recorded function segment for printing.
    This function never returns NULL.  */
 
 static const char *
-ftrace_print_function_name (struct btrace_func *bfun)
+ftrace_print_function_name (const struct btrace_function *bfun)
 {
   struct minimal_symbol *msym;
   struct symbol *sym;
@@ -151,7 +70,7 @@ ftrace_print_function_name (struct btrace_func *bfun)
    This function never returns NULL.  */
 
 static const char *
-ftrace_print_filename (struct btrace_func *bfun)
+ftrace_print_filename (const struct btrace_function *bfun)
 {
   struct symbol *sym;
   const char *filename;
@@ -166,44 +85,53 @@ ftrace_print_filename (struct btrace_func *bfun)
   return filename;
 }
 
-/* Print an ftrace debug status message.  */
+/* Print the address of an instruction.
+   This function never returns NULL.  */
 
-static void
-ftrace_debug (struct btrace_func *bfun, const char *prefix)
+static const char *
+ftrace_print_insn_addr (const struct btrace_insn *insn)
 {
-  DEBUG_FTRACE ("%s: fun = %s, file = %s, lines = [%d; %d], insn = [%u; %u]",
-		prefix, ftrace_print_function_name (bfun),
-		ftrace_print_filename (bfun), bfun->lbegin, bfun->lend,
-		bfun->ibegin, bfun->iend);
+  if (insn == NULL)
+    return "<nil>";
+
+  return core_addr_to_string_nz (insn->pc);
 }
 
-/* Initialize a recorded function segment.  */
+/* Print an ftrace debug status message.  */
 
 static void
-ftrace_init_func (struct btrace_func *bfun, struct minimal_symbol *mfun,
-		  struct symbol *fun, unsigned int idx)
+ftrace_debug (const struct btrace_function *bfun, const char *prefix)
 {
-  bfun->msym = mfun;
-  bfun->sym = fun;
-  bfun->lbegin = INT_MAX;
-  bfun->lend = 0;
-  bfun->ibegin = idx;
-  bfun->iend = idx;
+  const char *fun, *file;
+  unsigned int ibegin, iend;
+  int lbegin, lend, level;
+
+  fun = ftrace_print_function_name (bfun);
+  file = ftrace_print_filename (bfun);
+  level = bfun->level;
+
+  lbegin = bfun->lbegin;
+  lend = bfun->lend;
+
+  ibegin = bfun->insn_offset;
+  iend = ibegin + VEC_length (btrace_insn_s, bfun->insn);
+
+  DEBUG_FTRACE ("%s: fun = %s, file = %s, level = %d, lines = [%d; %d], "
+		"insn = [%u; %u)", prefix, fun, file, level, lbegin, lend,
+		ibegin, iend);
 }
 
-/* Check whether the function has changed.  */
+/* Return non-zero if BFUN does not match MFUN and FUN;
+   return zero, otherwise.  */
 
 static int
-ftrace_function_switched (struct btrace_func *bfun,
-			  struct minimal_symbol *mfun, struct symbol *fun)
+ftrace_function_switched (const struct btrace_function *bfun,
+			  const struct minimal_symbol *mfun,
+			  const struct symbol *fun)
 {
   struct minimal_symbol *msym;
   struct symbol *sym;
 
-  /* The function changed if we did not have one before.  */
-  if (bfun == NULL)
-    return 1;
-
   msym = bfun->msym;
   sym = bfun->sym;
 
@@ -228,15 +156,24 @@ ftrace_function_switched (struct btrace_func *bfun,
 	return 1;
     }
 
+  /* If we lost symbol information, we switched functions.  */
+  if (!(msym == NULL && sym == NULL) && mfun == NULL && fun == NULL)
+    return 1;
+
+  /* If we gained symbol information, we switched functions.  */
+  if (msym == NULL && sym == NULL && !(mfun == NULL && fun == NULL))
+    return 1;
+
   return 0;
 }
 
-/* Check if we should skip this file when generating the function call
-   history.  We would want to do that if, say, a macro that is defined
-   in another file is expanded in this function.  */
+/* Return non-zero if we should skip this file when generating the function
+   call history; zero, otherwise.
+   We would want to do that if, say, a macro that is defined in another file
+   is expanded in this function.  */
 
 static int
-ftrace_skip_file (struct btrace_func *bfun, const char *filename)
+ftrace_skip_file (const struct btrace_function *bfun, const char *fullname)
 {
   struct symbol *sym;
   const char *bfile;
@@ -248,89 +185,477 @@ ftrace_skip_file (struct btrace_func *bfun, const char *filename)
   else
     bfile = "";
 
-  if (filename == NULL)
-    filename = "";
+  if (fullname == NULL)
+    fullname = "";
 
-  return (filename_cmp (bfile, filename) != 0);
+  return (filename_cmp (bfile, fullname) != 0);
 }
 
-/* Compute the function trace from the instruction trace.  */
+/* Allocate and initialize a new branch trace function segment.
+   PREV is the chronologically preceding function segment.
+   MFUN and FUN are the symbol information we have for this function.  */
 
-static VEC (btrace_func_s) *
-compute_ftrace (VEC (btrace_inst_s) *itrace)
+static struct btrace_function *
+ftrace_new_function (struct btrace_function *prev,
+		     struct minimal_symbol *mfun,
+		     struct symbol *fun)
 {
-  VEC (btrace_func_s) *ftrace;
-  struct btrace_inst *binst;
-  struct btrace_func *bfun;
-  unsigned int idx;
+  struct btrace_function *bfun;
 
-  DEBUG ("compute ftrace");
+  bfun = xzalloc (sizeof (*bfun));
+
+  bfun->msym = mfun;
+  bfun->sym = fun;
+  bfun->flow.prev = prev;
+
+  /* We start with the identities of min and max, respectively.  */
+  bfun->lbegin = INT_MAX;
+  bfun->lend = INT_MIN;
+
+  if (prev != NULL)
+    {
+      gdb_assert (prev->flow.next == NULL);
+      prev->flow.next = bfun;
+
+      bfun->number = prev->number + 1;
+      bfun->insn_offset = (prev->insn_offset
+			   + VEC_length (btrace_insn_s, prev->insn));
+    }
+
+  return bfun;
+}
+
+/* Update the UP field of a function segment.  */
 
-  ftrace = NULL;
-  bfun = NULL;
+static void
+ftrace_update_caller (struct btrace_function *bfun,
+		      struct btrace_function *caller,
+		      unsigned int flags)
+{
+  if (bfun->up != NULL)
+    ftrace_debug (bfun, "updating caller");
+
+  bfun->up = caller;
+  bfun->flags = flags;
+
+  ftrace_debug (bfun, "set caller");
+}
+
+/* Fix up the caller for a function segment.  */
 
-  for (idx = 0; VEC_iterate (btrace_inst_s, itrace, idx, binst); ++idx)
+static void
+ftrace_fixup_caller (struct btrace_function *bfun,
+		     struct btrace_function *caller,
+		     unsigned int flags)
+{
+  struct btrace_function *prev, *next;
+
+  ftrace_update_caller (bfun, caller, flags);
+
+  /* Update all function segments belonging to the same function.  */
+  for (prev = bfun->segment.prev; prev != NULL; prev = prev->segment.prev)
+    ftrace_update_caller (prev, caller, flags);
+
+  for (next = bfun->segment.next; next != NULL; next = next->segment.next)
+    ftrace_update_caller (next, caller, flags);
+}
+
+/* Add a new function segment for a call.
+   CALLER is the chronologically preceding function segment.
+   MFUN and FUN are the symbol information we have for this function.  */
+
+static struct btrace_function *
+ftrace_new_call (struct btrace_function *caller,
+		 struct minimal_symbol *mfun,
+		 struct symbol *fun)
+{
+  struct btrace_function *bfun;
+
+  bfun = ftrace_new_function (caller, mfun, fun);
+  bfun->up = caller;
+  bfun->level = caller->level + 1;
+
+  ftrace_debug (bfun, "new call");
+
+  return bfun;
+}
+
+/* Add a new function segment for a tail call.
+   CALLER is the chronologically preceding function segment.
+   MFUN and FUN are the symbol information we have for this function.  */
+
+static struct btrace_function *
+ftrace_new_tailcall (struct btrace_function *caller,
+		     struct minimal_symbol *mfun,
+		     struct symbol *fun)
+{
+  struct btrace_function *bfun;
+
+  bfun = ftrace_new_function (caller, mfun, fun);
+  bfun->up = caller;
+  bfun->level = caller->level + 1;
+  bfun->flags |= BFUN_UP_LINKS_TO_TAILCALL;
+
+  ftrace_debug (bfun, "new tail call");
+
+  return bfun;
+}
+
+/* Find the innermost caller in the back trace of BFUN with MFUN/FUN
+   symbol information.  */
+
+static struct btrace_function *
+ftrace_find_caller (struct btrace_function *bfun,
+		    struct minimal_symbol *mfun,
+		    struct symbol *fun)
+{
+  for (; bfun != NULL; bfun = bfun->up)
     {
-      struct symtab_and_line sal;
-      struct bound_minimal_symbol mfun;
-      struct symbol *fun;
-      const char *filename;
+      /* Skip functions with incompatible symbol information.  */
+      if (ftrace_function_switched (bfun, mfun, fun))
+	continue;
+
+      /* This is the function segment we're looking for.  */
+      break;
+    }
+
+  return bfun;
+}
+
+/* Find the innermost caller in the back trace of BFUN, skipping all
+   function segments that do not end with a call instruction (e.g.
+   tail calls ending with a jump).  */
+
+static struct btrace_function *
+ftrace_find_call (struct gdbarch *gdbarch, struct btrace_function *bfun)
+{
+  for (; bfun != NULL; bfun = bfun->up)
+    {
+      struct btrace_insn *last;
       CORE_ADDR pc;
 
-      pc = binst->pc;
+      /* We do not allow empty function segments.  */
+      gdb_assert (!VEC_empty (btrace_insn_s, bfun->insn));
 
-      /* Try to determine the function we're in.  We use both types of symbols
-	 to avoid surprises when we sometimes get a full symbol and sometimes
-	 only a minimal symbol.  */
-      fun = find_pc_function (pc);
-      mfun = lookup_minimal_symbol_by_pc (pc);
+      last = VEC_last (btrace_insn_s, bfun->insn);
+      pc = last->pc;
+
+      if (gdbarch_insn_is_call (gdbarch, pc))
+	break;
+    }
+
+  return bfun;
+}
+
+/* Add a new function segment for a return.
+   PREV is the chronologically preceding function segment.
+   MFUN and FUN are the symbol information we have for this function.  */
+
+static struct btrace_function *
+ftrace_new_return (struct gdbarch *gdbarch,
+		   struct btrace_function *prev,
+		   struct minimal_symbol *mfun,
+		   struct symbol *fun)
+{
+  struct btrace_function *bfun, *caller;
 
-      if (fun == NULL && mfun.minsym == NULL)
+  bfun = ftrace_new_function (prev, mfun, fun);
+
+  /* It is important to start at PREV's caller.  Otherwise, we might find
+     PREV itself, if PREV is a recursive function.  */
+  caller = ftrace_find_caller (prev->up, mfun, fun);
+  if (caller != NULL)
+    {
+      /* The caller of PREV is the preceding btrace function segment in this
+	 function instance.  */
+      gdb_assert (caller->segment.next == NULL);
+
+      caller->segment.next = bfun;
+      bfun->segment.prev = caller;
+
+      /* Maintain the function level.  */
+      bfun->level = caller->level;
+
+      /* Maintain the call stack.  */
+      bfun->up = caller->up;
+      bfun->flags = caller->flags;
+
+      ftrace_debug (bfun, "new return");
+    }
+  else
+    {
+      /* We did not find a caller.  This could mean that something went
+	 wrong or that the call is simply not included in the trace.  */
+
+      /* Let's search for some actual call.  */
+      caller = ftrace_find_call (gdbarch, prev->up);
+      if (caller == NULL)
 	{
-	  DEBUG_FTRACE ("no symbol at %u, pc=%s", idx,
-			core_addr_to_string_nz (pc));
-	  continue;
-	}
+	  /* There is no call in PREV's back trace.  We assume that the
+	     branch trace did not include it.  */
+
+	  /* Let's find the topmost call function - this skips tail calls.  */
+	  while (prev->up != NULL)
+	    prev = prev->up;
+
+	  /* We maintain levels for a series of returns for which we have
+	     not seen the calls, but we restart at level 0, otherwise.  */
+	  bfun->level = min (0, prev->level) - 1;
+
+	  /* Fix up the call stack for PREV.  */
+	  ftrace_fixup_caller (prev, bfun, BFUN_UP_LINKS_TO_RET);
 
-      /* If we're switching functions, we start over.  */
-      if (ftrace_function_switched (bfun, mfun.minsym, fun))
+	  ftrace_debug (bfun, "new return - no caller");
+	}
+      else
 	{
-	  bfun = VEC_safe_push (btrace_func_s, ftrace, NULL);
+	  /* There is a call in PREV's back trace to which we should have
+	     returned.  Let's remain at this level.  */
+	  bfun->level = prev->level;
 
-	  ftrace_init_func (bfun, mfun.minsym, fun, idx);
-	  ftrace_debug (bfun, "init");
+	  ftrace_debug (bfun, "new return - unknown caller");
 	}
+    }
+
+  return bfun;
+}
+
+/* Add a new function segment for a function switch.
+   PREV is the chronologically preceding function segment.
+   MFUN and FUN are the symbol information we have for this function.  */
+
+static struct btrace_function *
+ftrace_new_switch (struct btrace_function *prev,
+		   struct minimal_symbol *mfun,
+		   struct symbol *fun)
+{
+  struct btrace_function *bfun;
+
+  /* This is an unexplained function switch.  The call stack will likely
+     be wrong at this point.  */
+  bfun = ftrace_new_function (prev, mfun, fun);
 
-      /* Update the instruction range.  */
-      bfun->iend = idx;
-      ftrace_debug (bfun, "update insns");
+  /* We keep the function level.  */
+  bfun->level = prev->level;
+
+  ftrace_debug (bfun, "new switch");
+
+  return bfun;
+}
+
+/* Update BFUN with respect to the instruction at PC.  This may create new
+   function segments.
+   Return the chronologically latest function segment, never NULL.  */
+
+static struct btrace_function *
+ftrace_update_function (struct gdbarch *gdbarch,
+			struct btrace_function *bfun, CORE_ADDR pc)
+{
+  struct bound_minimal_symbol bmfun;
+  struct minimal_symbol *mfun;
+  struct symbol *fun;
+  struct btrace_insn *last;
+
+  /* Try to determine the function we're in.  We use both types of symbols
+     to avoid surprises when we sometimes get a full symbol and sometimes
+     only a minimal symbol.  */
+  fun = find_pc_function (pc);
+  bmfun = lookup_minimal_symbol_by_pc (pc);
+  mfun = bmfun.minsym;
+
+  if (fun == NULL && mfun == NULL)
+    DEBUG_FTRACE ("no symbol at %s", core_addr_to_string_nz (pc));
+
+  /* If we didn't have a function before, we create one.  */
+  if (bfun == NULL)
+    return ftrace_new_function (bfun, mfun, fun);
 
-      /* Let's see if we have source correlation, as well.  */
-      sal = find_pc_line (pc, 0);
-      if (sal.symtab == NULL || sal.line == 0)
+  /* Check the last instruction, if we have one.
+     We do this check first, since it allows us to fill in the call stack
+     links in addition to the normal flow links.  */
+  last = NULL;
+  if (!VEC_empty (btrace_insn_s, bfun->insn))
+    last = VEC_last (btrace_insn_s, bfun->insn);
+
+  if (last != NULL)
+    {
+      CORE_ADDR lpc;
+
+      lpc = last->pc;
+
+      /* Check for returns.  */
+      if (gdbarch_insn_is_ret (gdbarch, lpc))
+	return ftrace_new_return (gdbarch, bfun, mfun, fun);
+
+      /* Check for calls.  */
+      if (gdbarch_insn_is_call (gdbarch, lpc))
 	{
-	  DEBUG_FTRACE ("no lines at %u, pc=%s", idx,
-			core_addr_to_string_nz (pc));
-	  continue;
+	  int size;
+
+	  size = gdb_insn_length (gdbarch, lpc);
+
+	  /* Ignore calls to the next instruction.  They are used for PIC.  */
+	  if (lpc + size != pc)
+	    return ftrace_new_call (bfun, mfun, fun);
 	}
+    }
+
+  /* Check if we're switching functions for some other reason.  */
+  if (ftrace_function_switched (bfun, mfun, fun))
+    {
+      DEBUG_FTRACE ("switching from %s in %s at %s",
+		    ftrace_print_insn_addr (last),
+		    ftrace_print_function_name (bfun),
+		    ftrace_print_filename (bfun));
 
-      /* Check if we switched files.  This could happen if, say, a macro that
-	 is defined in another file is expanded here.  */
-      filename = symtab_to_fullname (sal.symtab);
-      if (ftrace_skip_file (bfun, filename))
+      if (last != NULL)
 	{
-	  DEBUG_FTRACE ("ignoring file at %u, pc=%s, file=%s", idx,
-			core_addr_to_string_nz (pc), filename);
-	  continue;
+	  CORE_ADDR start, lpc;
+
+	  /* If we have symbol information for our current location, use
+	     it to check that we jump to the start of a function.  */
+	  if (fun != NULL || mfun != NULL)
+	    start = get_pc_function_start (pc);
+	  else
+	    start = pc;
+
+	  lpc = last->pc;
+
+	  /* Jumps indicate optimized tail calls.  */
+	  if (start == pc && gdbarch_insn_is_jump (gdbarch, lpc))
+	    return ftrace_new_tailcall (bfun, mfun, fun);
 	}
 
-      /* Update the line range.  */
-      bfun->lbegin = min (bfun->lbegin, sal.line);
-      bfun->lend = max (bfun->lend, sal.line);
-      ftrace_debug (bfun, "update lines");
+      return ftrace_new_switch (bfun, mfun, fun);
+    }
+
+  return bfun;
+}
+
+/* Update BFUN's source correlation with respect to the instruction at PC.  */
+
+static void
+ftrace_update_lines (struct btrace_function *bfun, CORE_ADDR pc)
+{
+  struct symtab_and_line sal;
+  const char *fullname;
+
+  sal = find_pc_line (pc, 0);
+  if (sal.symtab == NULL || sal.line == 0)
+    {
+      DEBUG_FTRACE ("no lines at %s", core_addr_to_string_nz (pc));
+      return;
+    }
+
+  /* Check if we switched files.  This could happen if, say, a macro that
+     is defined in another file is expanded here.  */
+  fullname = symtab_to_fullname (sal.symtab);
+  if (ftrace_skip_file (bfun, fullname))
+    {
+      DEBUG_FTRACE ("ignoring file at %s, file=%s",
+		    core_addr_to_string_nz (pc), fullname);
+      return;
+    }
+
+  /* Update the line range.  */
+  bfun->lbegin = min (bfun->lbegin, sal.line);
+  bfun->lend = max (bfun->lend, sal.line);
+
+  if (record_debug > 1)
+    ftrace_debug (bfun, "update lines");
+}
+
+/* Add the instruction at PC to BFUN's instructions.  */
+
+static void
+ftrace_update_insns (struct btrace_function *bfun, CORE_ADDR pc)
+{
+  struct btrace_insn *insn;
+
+  insn = VEC_safe_push (btrace_insn_s, bfun->insn, NULL);
+  insn->pc = pc;
+
+  if (record_debug > 1)
+    ftrace_debug (bfun, "update insn");
+}
+
+/* Compute the function branch trace from a block branch trace BTRACE for
+   a thread given by BTINFO.  */
+
+static void
+btrace_compute_ftrace (struct btrace_thread_info *btinfo,
+		       VEC (btrace_block_s) *btrace)
+{
+  struct btrace_function *begin, *end;
+  struct gdbarch *gdbarch;
+  unsigned int blk;
+  int level;
+
+  DEBUG ("compute ftrace");
+
+  gdbarch = target_gdbarch ();
+  begin = NULL;
+  end = NULL;
+  level = INT_MAX;
+  blk = VEC_length (btrace_block_s, btrace);
+
+  while (blk != 0)
+    {
+      btrace_block_s *block;
+      CORE_ADDR pc;
+
+      blk -= 1;
+
+      block = VEC_index (btrace_block_s, btrace, blk);
+      pc = block->begin;
+
+      for (;;)
+	{
+	  int size;
+
+	  /* We should hit the end of the block.  Warn if we went too far.  */
+	  if (block->end < pc)
+	    {
+	      warning (_("Recorded trace may be corrupted around %s."),
+		       core_addr_to_string_nz (pc));
+	      break;
+	    }
+
+	  end = ftrace_update_function (gdbarch, end, pc);
+	  if (begin == NULL)
+	    begin = end;
+
+	  /* Maintain the function level offset.  */
+	  level = min (level, end->level);
+
+	  ftrace_update_insns (end, pc);
+	  ftrace_update_lines (end, pc);
+
+	  /* We're done once we pushed the instruction at the end.  */
+	  if (block->end == pc)
+	    break;
+
+	  size = gdb_insn_length (gdbarch, pc);
+
+	  /* Make sure we terminate if we fail to compute the size.  */
+	  if (size <= 0)
+	    {
+	      warning (_("Recorded trace may be incomplete around %s."),
+		       core_addr_to_string_nz (pc));
+	      break;
+	    }
+
+	  pc += size;
+	}
     }
 
-  return ftrace;
+  btinfo->begin = begin;
+  btinfo->end = end;
+
+  /* LEVEL is the minimal function level of all btrace function segments.
+     Define the global level offset to -LEVEL so all function levels are
+     normalized to start at zero.  */
+  btinfo->level = -level;
 }
 
 /* See btrace.h.  */
@@ -394,6 +719,7 @@ btrace_fetch (struct thread_info *tp)
 {
   struct btrace_thread_info *btinfo;
   VEC (btrace_block_s) *btrace;
+  struct cleanup *cleanup;
 
   DEBUG ("fetch thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
 
@@ -402,18 +728,15 @@ btrace_fetch (struct thread_info *tp)
     return;
 
   btrace = target_read_btrace (btinfo->target, btrace_read_new);
-  if (VEC_empty (btrace_block_s, btrace))
-    return;
-
-  btrace_clear (tp);
+  cleanup = make_cleanup (VEC_cleanup (btrace_block_s), &btrace);
 
-  btinfo->btrace = btrace;
-  btinfo->itrace = compute_itrace (btinfo->btrace);
-  btinfo->ftrace = compute_ftrace (btinfo->itrace);
+  if (!VEC_empty (btrace_block_s, btrace))
+    {
+      btrace_clear (tp);
+      btrace_compute_ftrace (btinfo, btrace);
+    }
 
-  /* Initialize branch trace iterators.  */
-  btrace_init_insn_iterator (btinfo);
-  btrace_init_func_iterator (btinfo);
+  do_cleanups (cleanup);
 }
 
 /* See btrace.h.  */
@@ -422,18 +745,29 @@ void
 btrace_clear (struct thread_info *tp)
 {
   struct btrace_thread_info *btinfo;
+  struct btrace_function *it, *trash;
 
   DEBUG ("clear thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
 
   btinfo = &tp->btrace;
 
-  VEC_free (btrace_block_s, btinfo->btrace);
-  VEC_free (btrace_inst_s, btinfo->itrace);
-  VEC_free (btrace_func_s, btinfo->ftrace);
+  it = btinfo->begin;
+  while (it != NULL)
+    {
+      trash = it;
+      it = it->flow.next;
+
+      xfree (trash);
+    }
+
+  btinfo->begin = NULL;
+  btinfo->end = NULL;
 
-  btinfo->btrace = NULL;
-  btinfo->itrace = NULL;
-  btinfo->ftrace = NULL;
+  xfree (btinfo->insn_history);
+  xfree (btinfo->call_history);
+
+  btinfo->insn_history = NULL;
+  btinfo->call_history = NULL;
 }
 
 /* See btrace.h.  */
@@ -541,3 +875,493 @@ parse_xml_btrace (const char *buffer)
 
   return btrace;
 }
+
+/* See btrace.h.  */
+
+const struct btrace_insn *
+btrace_insn_get (const struct btrace_insn_iterator *it)
+{
+  const struct btrace_function *bfun;
+  unsigned int index, end;
+
+  if (it == NULL)
+    return NULL;
+
+  index = it->index;
+  bfun = it->function;
+  if (bfun == NULL)
+    return NULL;
+
+  /* The index is within the bounds of this function's instruction vector.  */
+  end = VEC_length (btrace_insn_s, bfun->insn);
+  gdb_assert (0 < end);
+  gdb_assert (index < end);
+
+  return VEC_index (btrace_insn_s, bfun->insn, index);
+}
+
+/* See btrace.h.  */
+
+unsigned int
+btrace_insn_number (const struct btrace_insn_iterator *it)
+{
+  const struct btrace_function *bfun;
+
+  if (it == NULL)
+    return 0;
+
+  bfun = it->function;
+  if (bfun == NULL)
+    return 0;
+
+  return bfun->insn_offset + it->index;
+}
+
+/* See btrace.h.  */
+
+void
+btrace_insn_begin (struct btrace_insn_iterator *it,
+		   const struct btrace_thread_info *btinfo)
+{
+  const struct btrace_function *bfun;
+
+  bfun = btinfo->begin;
+  if (bfun == NULL)
+    error (_("No trace."));
+
+  it->function = bfun;
+  it->index = 0;
+}
+
+/* See btrace.h.  */
+
+void
+btrace_insn_end (struct btrace_insn_iterator *it,
+		 const struct btrace_thread_info *btinfo)
+{
+  const struct btrace_function *bfun;
+  unsigned int length;
+
+  bfun = btinfo->end;
+  if (bfun == NULL)
+    error (_("No trace."));
+
+  /* The last instruction in the last function is the current instruction.
+     We point to it - it is one past the end of the execution trace.  */
+  length = VEC_length (btrace_insn_s, bfun->insn);
+
+  it->function = bfun;
+  it->index = length - 1;
+}
+
+/* See btrace.h.  */
+
+unsigned int
+btrace_insn_next (struct btrace_insn_iterator *it, unsigned int stride)
+{
+  const struct btrace_function *bfun;
+  unsigned int index, steps;
+
+  if (it == NULL)
+    return 0;
+
+  bfun = it->function;
+  if (bfun == NULL)
+    return 0;
+
+  steps = 0;
+  index = it->index;
+
+  while (stride != 0)
+    {
+      unsigned int end, space, adv;
+
+      end = VEC_length (btrace_insn_s, bfun->insn);
+
+      gdb_assert (0 < end);
+      gdb_assert (index < end);
+
+      /* Compute the number of instructions remaining in this segment.  */
+      space = end - index;
+
+      /* Advance the iterator as far as possible within this segment.  */
+      adv = min (space, stride);
+      stride -= adv;
+      index += adv;
+      steps += adv;
+
+      /* Move to the next function if we're at the end of this one.  */
+      if (index == end)
+	{
+	  const struct btrace_function *next;
+
+	  next = bfun->flow.next;
+	  if (next == NULL)
+	    {
+	      /* We stepped past the last function.
+
+		 Let's adjust the index to point to the last instruction in
+		 the previous function.  */
+	      index -= 1;
+	      steps -= 1;
+	      break;
+	    }
+
+	  /* We now point to the first instruction in the new function.  */
+	  bfun = next;
+	  index = 0;
+	}
+
+      /* We did make progress.  */
+      gdb_assert (adv > 0);
+    }
+
+  /* Update the iterator.  */
+  it->function = bfun;
+  it->index = index;
+
+  return steps;
+}
+
+/* See btrace.h.  */
+
+unsigned int
+btrace_insn_prev (struct btrace_insn_iterator *it, unsigned int stride)
+{
+  const struct btrace_function *bfun;
+  unsigned int index, steps;
+
+  if (it == NULL)
+    return 0;
+
+  bfun = it->function;
+  if (bfun == NULL)
+    return 0;
+
+  steps = 0;
+  index = it->index;
+
+  while (stride != 0)
+    {
+      unsigned int adv;
+
+      /* Move to the previous function if we're at the start of this one.  */
+      if (index == 0)
+	{
+	  const struct btrace_function *prev;
+
+	  prev = bfun->flow.prev;
+	  if (prev == NULL)
+	    break;
+
+	  /* We point to one after the last instruction in the new function.  */
+	  bfun = prev;
+	  index = VEC_length (btrace_insn_s, bfun->insn);
+
+	  /* There is at least one instruction in this function segment.  */
+	  gdb_assert (index > 0);
+	}
+
+      /* Advance the iterator as far as possible within this segment.  */
+      adv = min (index, stride);
+      stride -= adv;
+      index -= adv;
+      steps += adv;
+
+      /* We did make progress.  */
+      gdb_assert (adv > 0);
+    }
+
+  /* Update the iterator.  */
+  it->function = bfun;
+  it->index = index;
+
+  return steps;
+}
+
+/* See btrace.h.  */
+
+int
+btrace_insn_cmp (const struct btrace_insn_iterator *lhs,
+		 const struct btrace_insn_iterator *rhs)
+{
+  unsigned int lnum, rnum;
+
+  lnum = btrace_insn_number (lhs);
+  rnum = btrace_insn_number (rhs);
+
+  return (int) (lnum - rnum);
+}
+
+/* See btrace.h.  */
+
+int
+btrace_find_insn_by_number (struct btrace_insn_iterator *it,
+			    const struct btrace_thread_info *btinfo,
+			    unsigned int number)
+{
+  const struct btrace_function *bfun;
+  unsigned int end;
+
+  for (bfun = btinfo->end; bfun != NULL; bfun = bfun->flow.prev)
+    if (bfun->insn_offset <= number)
+      break;
+
+  if (bfun == NULL)
+    return 0;
+
+  end = bfun->insn_offset + VEC_length (btrace_insn_s, bfun->insn);
+  if (end <= number)
+    return 0;
+
+  it->function = bfun;
+  it->index = number - bfun->insn_offset;
+
+  return 1;
+}
+
+/* See btrace.h.  */
+
+const struct btrace_function *
+btrace_call_get (const struct btrace_call_iterator *it)
+{
+  if (it == NULL)
+    return NULL;
+
+  return it->function;
+}
+
+/* See btrace.h.  */
+
+unsigned int
+btrace_call_number (const struct btrace_call_iterator *it)
+{
+  const struct btrace_thread_info *btinfo;
+  const struct btrace_function *bfun;
+  unsigned int insns;
+
+  if (it == NULL)
+    return 0;
+
+  btinfo = it->btinfo;
+  if (btinfo == NULL)
+    return 0;
+
+  bfun = it->function;
+  if (bfun != NULL)
+    return bfun->number;
+
+  /* For the end iterator, i.e. bfun == NULL, we return one more than the
+     number of the last function.  */
+  bfun = btinfo->end;
+  insns = VEC_length (btrace_insn_s, bfun->insn);
+
+  /* If the function contains only a single instruction (i.e. the current
+     instruction), it will be skipped and its number is already the number
+     we seek.  */
+  if (insns == 1)
+    return bfun->number;
+
+  /* Otherwise, return one more than the number of the last function.  */
+  return bfun->number + 1;
+}
+
+/* See btrace.h.  */
+
+void
+btrace_call_begin (struct btrace_call_iterator *it,
+		   const struct btrace_thread_info *btinfo)
+{
+  const struct btrace_function *bfun;
+
+  bfun = btinfo->begin;
+  if (bfun == NULL)
+    error (_("No trace."));
+
+  it->btinfo = btinfo;
+  it->function = bfun;
+}
+
+/* See btrace.h.  */
+
+void
+btrace_call_end (struct btrace_call_iterator *it,
+		 const struct btrace_thread_info *btinfo)
+{
+  const struct btrace_function *bfun;
+
+  bfun = btinfo->end;
+  if (bfun == NULL)
+    error (_("No trace."));
+
+  it->btinfo = btinfo;
+  it->function = NULL;
+}
+
+/* See btrace.h.  */
+
+unsigned int
+btrace_call_next (struct btrace_call_iterator *it, unsigned int stride)
+{
+  const struct btrace_function *bfun;
+  unsigned int steps;
+
+  if (it == NULL)
+    return 0;
+
+  bfun = it->function;
+  steps = 0;
+  while (bfun != NULL)
+    {
+      const struct btrace_function *next;
+      unsigned int insns;
+
+      next = bfun->flow.next;
+      if (next == NULL)
+	{
+	  /* Ignore the last function if it only contains a single
+	     (i.e. the current) instruction.  */
+	  insns = VEC_length (btrace_insn_s, bfun->insn);
+	  if (insns == 1)
+	    steps -= 1;
+	}
+
+      if (stride == steps)
+	break;
+
+      bfun = next;
+      steps += 1;
+    }
+
+  it->function = bfun;
+  return steps;
+}
+
+/* See btrace.h.  */
+
+unsigned int
+btrace_call_prev (struct btrace_call_iterator *it, unsigned int stride)
+{
+  const struct btrace_thread_info *btinfo;
+  const struct btrace_function *bfun;
+  unsigned int steps;
+
+  if (it == NULL)
+    return 0;
+
+  bfun = it->function;
+  steps = 0;
+
+  if (bfun == NULL)
+    {
+      unsigned int insns;
+
+      btinfo = it->btinfo;
+      if (btinfo == NULL)
+	return 0;
+
+      bfun = btinfo->end;
+      if (bfun == NULL)
+	return 0;
+
+      /* Ignore the last function if it only contains a single
+	 (i.e. the current) instruction.  */
+      insns = VEC_length (btrace_insn_s, bfun->insn);
+      if (insns == 1)
+	bfun = bfun->flow.prev;
+
+      if (bfun == NULL)
+	return 0;
+
+      steps += 1;
+    }
+
+  while (steps < stride)
+    {
+      const struct btrace_function *prev;
+
+      prev = bfun->flow.prev;
+      if (prev == NULL)
+	break;
+
+      bfun = prev;
+      steps += 1;
+    }
+
+  it->function = bfun;
+  return steps;
+}
+
+/* See btrace.h.  */
+
+int
+btrace_call_cmp (const struct btrace_call_iterator *lhs,
+		 const struct btrace_call_iterator *rhs)
+{
+  unsigned int lnum, rnum;
+
+  lnum = btrace_call_number (lhs);
+  rnum = btrace_call_number (rhs);
+
+  return (int) (lnum - rnum);
+}
+
+/* See btrace.h.  */
+
+int
+btrace_find_call_by_number (struct btrace_call_iterator *it,
+			    const struct btrace_thread_info *btinfo,
+			    unsigned int number)
+{
+  const struct btrace_function *bfun;
+
+  if (btinfo == NULL)
+    return 0;
+
+  for (bfun = btinfo->end; bfun != NULL; bfun = bfun->flow.prev)
+    {
+      unsigned int bnum;
+
+      bnum = bfun->number;
+      if (number == bnum)
+	{
+	  it->btinfo = btinfo;
+	  it->function = bfun;
+	  return 1;
+	}
+
+      /* Functions are ordered and numbered consecutively.  We could bail out
+	 earlier.  On the other hand, it is very unlikely that we search for
+	 a nonexistent function.  */
+  }
+
+  return 0;
+}
+
+/* See btrace.h.  */
+
+void
+btrace_set_insn_history (struct btrace_thread_info *btinfo,
+			 const struct btrace_insn_iterator *begin,
+			 const struct btrace_insn_iterator *end)
+{
+  if (btinfo->insn_history == NULL)
+    btinfo->insn_history = xzalloc (sizeof (*btinfo->insn_history));
+
+  btinfo->insn_history->begin = *begin;
+  btinfo->insn_history->end = *end;
+}
+
+/* See btrace.h.  */
+
+void
+btrace_set_call_history (struct btrace_thread_info *btinfo,
+			 const struct btrace_call_iterator *begin,
+			 const struct btrace_call_iterator *end)
+{
+  if (btinfo->call_history == NULL)
+    btinfo->call_history = xzalloc (sizeof (*btinfo->call_history));
+
+  btinfo->call_history->begin = *begin;
+  btinfo->call_history->end = *end;
+}
diff --git a/gdb/btrace.h b/gdb/btrace.h
index bd8425d..a3322d2 100644
--- a/gdb/btrace.h
+++ b/gdb/btrace.h
@@ -29,63 +29,124 @@
 #include "btrace-common.h"
 
 struct thread_info;
+struct btrace_function;
 
 /* A branch trace instruction.
 
    This represents a single instruction in a branch trace.  */
-struct btrace_inst
+struct btrace_insn
 {
   /* The address of this instruction.  */
   CORE_ADDR pc;
 };
 
-/* A branch trace function.
+/* A vector of branch trace instructions.  */
+typedef struct btrace_insn btrace_insn_s;
+DEF_VEC_O (btrace_insn_s);
+
+/* A doubly-linked list of branch trace function segments.  */
+struct btrace_func_link
+{
+  struct btrace_function *prev;
+  struct btrace_function *next;
+};
+
+/* Flags for btrace function segments.  */
+enum btrace_function_flag
+{
+  /* The 'up' link interpretation.
+     If set, it points to the function segment we returned to.
+     If clear, it points to the function segment we called from.  */
+  BFUN_UP_LINKS_TO_RET = (1 << 0),
+
+  /* The 'up' link points to a tail call.  This obviously only makes sense
+     if BFUN_UP_LINKS_TO_RET is clear.  */
+  BFUN_UP_LINKS_TO_TAILCALL = (1 << 1)
+};
+
+/* A branch trace function segment.
 
    This represents a function segment in a branch trace, i.e. a consecutive
-   number of instructions belonging to the same function.  */
-struct btrace_func
+   number of instructions belonging to the same function.
+
+   We do not allow function segments without any instructions.  */
+struct btrace_function
 {
-  /* The full and minimal symbol for the function.  One of them may be NULL.  */
+  /* The full and minimal symbol for the function.  Both may be NULL.  */
   struct minimal_symbol *msym;
   struct symbol *sym;
 
+  /* The previous and next segment belonging to the same function.  */
+  struct btrace_func_link segment;
+
+  /* The previous and next function in control flow order.  */
+  struct btrace_func_link flow;
+
+  /* The directly preceding function segment in a (fake) call stack.  */
+  struct btrace_function *up;
+
+  /* The instructions in this function segment.  */
+  VEC (btrace_insn_s) *insn;
+
+  /* The instruction number offset for the first instruction in this
+     function segment.  */
+  unsigned int insn_offset;
+
+  /* The function number in control-flow order.  */
+  unsigned int number;
+
+  /* The function level in a back trace across the entire branch trace.
+     A caller's level is one lower than the level of its callee.
+
+     Levels can be negative if we see returns for which we have not seen
+     the corresponding calls.  The branch trace thread information provides
+     a fixup to normalize function levels so the smallest level is zero.  */
+  int level;
+
   /* The source line range of this function segment (both inclusive).  */
   int lbegin, lend;
 
-  /* The instruction number range in the instruction trace corresponding
-     to this function segment (both inclusive).  */
-  unsigned int ibegin, iend;
+  /* A bit-vector of btrace_function_flag.  */
+  unsigned int flags;
 };
 
-/* Branch trace may also be represented as a vector of:
+/* A branch trace instruction iterator.  */
+struct btrace_insn_iterator
+{
+  /* The branch trace function segment containing the instruction.  */
+  const struct btrace_function *function;
 
-   - branch trace instructions starting with the oldest instruction.
-   - branch trace functions starting with the oldest function.  */
-typedef struct btrace_inst btrace_inst_s;
-typedef struct btrace_func btrace_func_s;
+  /* The index into the function segment's instruction vector.  */
+  unsigned int index;
+};
 
-/* Define functions operating on branch trace vectors.  */
-DEF_VEC_O (btrace_inst_s);
-DEF_VEC_O (btrace_func_s);
+/* A branch trace function call iterator.  */
+struct btrace_call_iterator
+{
+  /* The branch trace information for this thread.  */
+  const struct btrace_thread_info *btinfo;
+
+  /* The branch trace function segment.
+     This will be NULL for the iterator pointing to the end of the trace.  */
+  const struct btrace_function *function;
+};
 
 /* Branch trace iteration state for "record instruction-history".  */
-struct btrace_insn_iterator
+struct btrace_insn_history
 {
-  /* The instruction index range from begin (inclusive) to end (exclusive)
-     that has been covered last time.
-     If end < begin, the branch trace has just been updated.  */
-  unsigned int begin;
-  unsigned int end;
+  /* The branch trace instruction range from begin (inclusive) to
+     end (exclusive) that has been covered last time.  */
+  struct btrace_insn_iterator begin;
+  struct btrace_insn_iterator end;
 };
 
 /* Branch trace iteration state for "record function-call-history".  */
-struct btrace_func_iterator
+struct btrace_call_history
 {
-  /* The function index range from begin (inclusive) to end (exclusive)
-     that has been covered last time.
-     If end < begin, the branch trace has just been updated.  */
-  unsigned int begin;
-  unsigned int end;
+  /* The branch trace function range from begin (inclusive) to end (exclusive)
+     that has been covered last time.  */
+  struct btrace_call_iterator begin;
+  struct btrace_call_iterator end;
 };
 
 /* Branch trace information per thread.
@@ -103,16 +164,23 @@ struct btrace_thread_info
      the underlying architecture.  */
   struct btrace_target_info *target;
 
-  /* The current branch trace for this thread.  */
-  VEC (btrace_block_s) *btrace;
-  VEC (btrace_inst_s) *itrace;
-  VEC (btrace_func_s) *ftrace;
+  /* The current branch trace for this thread (both inclusive).
+
+     The last instruction of END is the current instruction, which is not
+     part of the execution history.  */
+  struct btrace_function *begin;
+  struct btrace_function *end;
+
+  /* The function level offset.  When added to each function's level,
+     this normalizes the function levels such that the smallest level
+     becomes zero.  */
+  int level;
 
   /* The instruction history iterator.  */
-  struct btrace_insn_iterator insn_iterator;
+  struct btrace_insn_history *insn_history;
 
   /* The function call history iterator.  */
-  struct btrace_func_iterator func_iterator;
+  struct btrace_call_history *call_history;
 };
 
 /* Enable branch tracing for a thread.  */
@@ -139,4 +207,98 @@ extern void btrace_free_objfile (struct objfile *);
 /* Parse a branch trace xml document into a block vector.  */
 extern VEC (btrace_block_s) *parse_xml_btrace (const char*);
 
+/* Dereference a branch trace instruction iterator.  Return a pointer to the
+   instruction the iterator points to or NULL if the iterator does not point
+   to a valid instruction.  */
+extern const struct btrace_insn *
+ btrace_insn_get (const struct btrace_insn_iterator *);
+
+/* Return the instruction number for a branch trace iterator.
+   Returns one past the maximum instruction number for the end iterator.
+   Returns zero if the iterator does not point to a valid instruction.  */
+extern unsigned int btrace_insn_number (const struct btrace_insn_iterator *);
+
+/* Initialize a branch trace instruction iterator to point to the begin/end of
+   the branch trace.  Throws an error if there is no branch trace.  */
+extern void btrace_insn_begin (struct btrace_insn_iterator *,
+			       const struct btrace_thread_info *);
+extern void btrace_insn_end (struct btrace_insn_iterator *,
+			     const struct btrace_thread_info *);
+
+/* Increment/decrement a branch trace instruction iterator.  Return the number
+   of instructions by which the instruction iterator has been advanced.
+   Returns zero if the operation failed.  */
+extern unsigned int btrace_insn_next (struct btrace_insn_iterator *,
+				      unsigned int stride);
+extern unsigned int btrace_insn_prev (struct btrace_insn_iterator *,
+				      unsigned int stride);
+
+/* Compare two branch trace instruction iterators.
+   Return a negative number if LHS < RHS.
+   Return zero if LHS == RHS.
+   Return a positive number if LHS > RHS.  */
+extern int btrace_insn_cmp (const struct btrace_insn_iterator *lhs,
+			    const struct btrace_insn_iterator *rhs);
+
+/* Find an instruction in the function branch trace by its number.
+   If the instruction is found, initialize the branch trace instruction
+   iterator to point to this instruction and return non-zero.
+   Return zero, otherwise.  */
+extern int btrace_find_insn_by_number (struct btrace_insn_iterator *,
+				       const struct btrace_thread_info *,
+				       unsigned int number);
+
+/* Dereference a branch trace call iterator.  Return a pointer to the
+   function the iterator points to or NULL if the iterator points past
+   the end of the branch trace.  */
+extern const struct btrace_function *
+ btrace_call_get (const struct btrace_call_iterator *);
+
+/* Return the function number for a branch trace call iterator.
+   Returns one past the maximum function number for the end iterator.
+   Returns zero if the iterator does not point to a valid function.  */
+extern unsigned int btrace_call_number (const struct btrace_call_iterator *);
+
+/* Initialize a branch trace call iterator to point to the begin/end of
+   the branch trace.  Throws an error if there is no branch trace.  */
+extern void btrace_call_begin (struct btrace_call_iterator *,
+			       const struct btrace_thread_info *);
+extern void btrace_call_end (struct btrace_call_iterator *,
+			     const struct btrace_thread_info *);
+
+/* Increment/decrement a branch trace call iterator.  Return the number
+   of function segments by which the call iterator has been advanced.
+   Returns zero if the operation failed.  */
+extern unsigned int btrace_call_next (struct btrace_call_iterator *,
+				      unsigned int stride);
+extern unsigned int btrace_call_prev (struct btrace_call_iterator *,
+				      unsigned int stride);
+
+/* Compare two branch trace call iterators.
+   Return a negative number if LHS < RHS.
+   Return zero if LHS == RHS.
+   Return a positive number if LHS > RHS.  */
+extern int btrace_call_cmp (const struct btrace_call_iterator *lhs,
+			    const struct btrace_call_iterator *rhs);
+
+/* Find a function in the function branch trace by its number.
+   If the function is found, initialize the branch trace call
+   iterator to point to this function and return non-zero.
+   Return zero, otherwise.  */
+extern int btrace_find_call_by_number (struct btrace_call_iterator *,
+				       const struct btrace_thread_info *,
+				       unsigned int number);
+
+/* Set the branch trace instruction history from BEGIN (inclusive) to
+   END (exclusive).  */
+extern void btrace_set_insn_history (struct btrace_thread_info *,
+				     const struct btrace_insn_iterator *begin,
+				     const struct btrace_insn_iterator *end);
+
+/* Set the branch trace function call history from BEGIN (inclusive) to
+   END (exclusive).  */
+extern void btrace_set_call_history (struct btrace_thread_info *,
+				     const struct btrace_call_iterator *begin,
+				     const struct btrace_call_iterator *end);
+
 #endif /* BTRACE_H */
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 68f40c8..2e7c639 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -74,7 +74,7 @@ require_btrace (void)
 
   btinfo = &tp->btrace;
 
-  if (VEC_empty (btrace_inst_s, btinfo->itrace))
+  if (btinfo->begin == NULL)
     error (_("No trace."));
 
   return btinfo;
@@ -206,7 +206,7 @@ record_btrace_info (void)
 {
   struct btrace_thread_info *btinfo;
   struct thread_info *tp;
-  unsigned int insts, funcs;
+  unsigned int insns, calls;
 
   DEBUG ("info");
 
@@ -216,12 +216,26 @@ record_btrace_info (void)
 
   btrace_fetch (tp);
 
+  insns = 0;
+  calls = 0;
+
   btinfo = &tp->btrace;
-  insts = VEC_length (btrace_inst_s, btinfo->itrace);
-  funcs = VEC_length (btrace_func_s, btinfo->ftrace);
+  if (btinfo->begin != NULL)
+    {
+      struct btrace_call_iterator call;
+      struct btrace_insn_iterator insn;
+
+      btrace_call_end (&call, btinfo);
+      btrace_call_prev (&call, 1);
+      calls = btrace_call_number (&call) + 1;
+
+      btrace_insn_end (&insn, btinfo);
+      btrace_insn_prev (&insn, 1);
+      insns = btrace_insn_number (&insn) + 1;
+    }
 
   printf_unfiltered (_("Recorded %u instructions in %u functions for thread "
-		       "%d (%s).\n"), insts, funcs, tp->num,
+		       "%d (%s).\n"), insns, calls, tp->num,
 		     target_pid_to_str (tp->ptid));
 }
 
@@ -236,27 +250,31 @@ ui_out_field_uint (struct ui_out *uiout, const char *fld, unsigned int val)
 /* Disassemble a section of the recorded instruction trace.  */
 
 static void
-btrace_insn_history (struct btrace_thread_info *btinfo, struct ui_out *uiout,
-		     unsigned int begin, unsigned int end, int flags)
+btrace_insn_history (struct ui_out *uiout,
+		     const struct btrace_insn_iterator *begin,
+		     const struct btrace_insn_iterator *end, int flags)
 {
   struct gdbarch *gdbarch;
-  struct btrace_inst *inst;
-  unsigned int idx;
+  struct btrace_insn_iterator it;
 
-  DEBUG ("itrace (0x%x): [%u; %u[", flags, begin, end);
+  DEBUG ("itrace (0x%x): [%u; %u)", flags, btrace_insn_number (begin),
+	 btrace_insn_number (end));
 
   gdbarch = target_gdbarch ();
 
-  for (idx = begin; VEC_iterate (btrace_inst_s, btinfo->itrace, idx, inst)
-	 && idx < end; ++idx)
+  for (it = *begin; btrace_insn_cmp (&it, end) != 0; btrace_insn_next (&it, 1))
     {
+      const struct btrace_insn *insn;
+
+      insn = btrace_insn_get (&it);
+
       /* Print the instruction index.  */
-      ui_out_field_uint (uiout, "index", idx);
+      ui_out_field_uint (uiout, "index", btrace_insn_number (&it));
       ui_out_text (uiout, "\t");
 
       /* Disassembly with '/m' flag may not produce the expected result.
 	 See PR gdb/11833.  */
-      gdb_disassembly (gdbarch, uiout, NULL, flags, 1, inst->pc, inst->pc + 1);
+      gdb_disassembly (gdbarch, uiout, NULL, flags, 1, insn->pc, insn->pc + 1);
     }
 }
 
@@ -266,72 +284,62 @@ static void
 record_btrace_insn_history (int size, int flags)
 {
   struct btrace_thread_info *btinfo;
+  struct btrace_insn_history *history;
+  struct btrace_insn_iterator begin, end;
   struct cleanup *uiout_cleanup;
   struct ui_out *uiout;
-  unsigned int context, last, begin, end;
+  unsigned int context, covered;
 
   uiout = current_uiout;
   uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
 						       "insn history");
-  btinfo = require_btrace ();
-  last = VEC_length (btrace_inst_s, btinfo->itrace);
-
   context = abs (size);
-  begin = btinfo->insn_iterator.begin;
-  end = btinfo->insn_iterator.end;
-
-  DEBUG ("insn-history (0x%x): %d, prev: [%u; %u[", flags, size, begin, end);
-
   if (context == 0)
     error (_("Bad record instruction-history-size."));
 
-  /* We start at the end.  */
-  if (end < begin)
-    {
-      /* Truncate the context, if necessary.  */
-      context = min (context, last);
-
-      end = last;
-      begin = end - context;
-    }
-  else if (size < 0)
+  btinfo = require_btrace ();
+  history = btinfo->insn_history;
+  if (history == NULL)
     {
-      if (begin == 0)
-	{
-	  printf_unfiltered (_("At the start of the branch trace record.\n"));
-
-	  btinfo->insn_iterator.end = 0;
-	  return;
-	}
+      /* No matter the direction, we start with the tail of the trace.  */
+      btrace_insn_end (&begin, btinfo);
+      end = begin;
 
-      /* Truncate the context, if necessary.  */
-      context = min (context, begin);
+      DEBUG ("insn-history (0x%x): %d", flags, size);
 
-      end = begin;
-      begin -= context;
+      covered = btrace_insn_prev (&begin, context);
     }
   else
     {
-      if (end == last)
-	{
-	  printf_unfiltered (_("At the end of the branch trace record.\n"));
+      begin = history->begin;
+      end = history->end;
 
-	  btinfo->insn_iterator.begin = last;
-	  return;
-	}
+      DEBUG ("insn-history (0x%x): %d, prev: [%u; %u)", flags, size,
+	     btrace_insn_number (&begin), btrace_insn_number (&end));
 
-      /* Truncate the context, if necessary.  */
-      context = min (context, last - end);
-
-      begin = end;
-      end += context;
+      if (size < 0)
+	{
+	  end = begin;
+	  covered = btrace_insn_prev (&begin, context);
+	}
+      else
+	{
+	  begin = end;
+	  covered = btrace_insn_next (&end, context);
+	}
     }
 
-  btrace_insn_history (btinfo, uiout, begin, end, flags);
-
-  btinfo->insn_iterator.begin = begin;
-  btinfo->insn_iterator.end = end;
+  if (covered > 0)
+    btrace_insn_history (uiout, &begin, &end, flags);
+  else
+    {
+      if (size < 0)
+	printf_unfiltered (_("At the start of the branch trace record.\n"));
+      else
+	printf_unfiltered (_("At the end of the branch trace record.\n"));
+    }
 
+  btrace_set_insn_history (btinfo, &begin, &end);
   do_cleanups (uiout_cleanup);
 }
 
@@ -341,39 +349,41 @@ static void
 record_btrace_insn_history_range (ULONGEST from, ULONGEST to, int flags)
 {
   struct btrace_thread_info *btinfo;
+  struct btrace_insn_history *history;
+  struct btrace_insn_iterator begin, end;
   struct cleanup *uiout_cleanup;
   struct ui_out *uiout;
-  unsigned int last, begin, end;
+  unsigned int low, high;
+  int found;
 
   uiout = current_uiout;
   uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
 						       "insn history");
-  btinfo = require_btrace ();
-  last = VEC_length (btrace_inst_s, btinfo->itrace);
+  low = (unsigned int) from;
+  high = (unsigned int) to;
 
-  begin = (unsigned int) from;
-  end = (unsigned int) to;
-
-  DEBUG ("insn-history (0x%x): [%u; %u[", flags, begin, end);
+  DEBUG ("insn-history (0x%x): [%u; %u)", flags, low, high);
 
   /* Check for wrap-arounds.  */
-  if (begin != from || end != to)
+  if (low != from || high != to)
     error (_("Bad range."));
 
-  if (end <= begin)
+  if (high <= low)
     error (_("Bad range."));
 
-  if (last <= begin)
-    error (_("Range out of bounds."));
+  btinfo = require_btrace ();
 
-  /* Truncate the range, if necessary.  */
-  if (last < end)
-    end = last;
+  found = btrace_find_insn_by_number (&begin, btinfo, low);
+  if (found == 0)
+    error (_("Range out of bounds."));
 
-  btrace_insn_history (btinfo, uiout, begin, end, flags);
+  /* Silently truncate the range, if necessary.  */
+  found = btrace_find_insn_by_number (&end, btinfo, high);
+  if (found == 0)
+    btrace_insn_end (&end, btinfo);
 
-  btinfo->insn_iterator.begin = begin;
-  btinfo->insn_iterator.end = end;
+  btrace_insn_history (uiout, &begin, &end, flags);
+  btrace_set_insn_history (btinfo, &begin, &end);
 
   do_cleanups (uiout_cleanup);
 }
@@ -412,23 +422,27 @@ record_btrace_insn_history_from (ULONGEST from, int size, int flags)
 /* Print the instruction number range for a function call history line.  */
 
 static void
-btrace_func_history_insn_range (struct ui_out *uiout, struct btrace_func *bfun)
+btrace_call_history_insn_range (struct ui_out *uiout,
+				const struct btrace_function *bfun)
 {
-  ui_out_field_uint (uiout, "insn begin", bfun->ibegin);
+  unsigned int begin, end;
 
-  if (bfun->ibegin == bfun->iend)
-    return;
+  begin = bfun->insn_offset;
+  end = begin + VEC_length (btrace_insn_s, bfun->insn);
 
+  ui_out_field_uint (uiout, "insn begin", begin);
   ui_out_text (uiout, "-");
-  ui_out_field_uint (uiout, "insn end", bfun->iend);
+  ui_out_field_uint (uiout, "insn end", end);
 }
 
 /* Print the source line information for a function call history line.  */
 
 static void
-btrace_func_history_src_line (struct ui_out *uiout, struct btrace_func *bfun)
+btrace_call_history_src_line (struct ui_out *uiout,
+			      const struct btrace_function *bfun)
 {
   struct symbol *sym;
+  int begin, end;
 
   sym = bfun->sym;
   if (sym == NULL)
@@ -437,54 +451,66 @@ btrace_func_history_src_line (struct ui_out *uiout, struct btrace_func *bfun)
   ui_out_field_string (uiout, "file",
 		       symtab_to_filename_for_display (sym->symtab));
 
-  if (bfun->lend == 0)
+  begin = bfun->lbegin;
+  end = bfun->lend;
+
+  if (end < begin)
     return;
 
   ui_out_text (uiout, ":");
-  ui_out_field_int (uiout, "min line", bfun->lbegin);
+  ui_out_field_int (uiout, "min line", begin);
 
-  if (bfun->lend == bfun->lbegin)
+  if (end == begin)
     return;
 
   ui_out_text (uiout, "-");
-  ui_out_field_int (uiout, "max line", bfun->lend);
+  ui_out_field_int (uiout, "max line", end);
 }
 
 /* Disassemble a section of the recorded function trace.  */
 
 static void
-btrace_func_history (struct btrace_thread_info *btinfo, struct ui_out *uiout,
-		     unsigned int begin, unsigned int end,
+btrace_call_history (struct ui_out *uiout,
+		     const struct btrace_call_iterator *begin,
+		     const struct btrace_call_iterator *end,
 		     enum record_print_flag flags)
 {
-  struct btrace_func *bfun;
-  unsigned int idx;
+  struct btrace_call_iterator it;
 
-  DEBUG ("ftrace (0x%x): [%u; %u[", flags, begin, end);
+  DEBUG ("ftrace (0x%x): [%u; %u)", flags, btrace_call_number (begin),
+	 btrace_call_number (end));
 
-  for (idx = begin; VEC_iterate (btrace_func_s, btinfo->ftrace, idx, bfun)
-	 && idx < end; ++idx)
+  for (it = *begin; btrace_call_cmp (&it, end) != 0; btrace_call_next (&it, 1))
     {
+      const struct btrace_function *bfun;
+      struct minimal_symbol *msym;
+      struct symbol *sym;
+
+      bfun = btrace_call_get (&it);
+      msym = bfun->msym;
+      sym = bfun->sym;
+
       /* Print the function index.  */
-      ui_out_field_uint (uiout, "index", idx);
+      ui_out_field_uint (uiout, "index", bfun->number);
       ui_out_text (uiout, "\t");
 
       if ((flags & RECORD_PRINT_INSN_RANGE) != 0)
 	{
-	  btrace_func_history_insn_range (uiout, bfun);
+	  btrace_call_history_insn_range (uiout, bfun);
 	  ui_out_text (uiout, "\t");
 	}
 
       if ((flags & RECORD_PRINT_SRC_LINE) != 0)
 	{
-	  btrace_func_history_src_line (uiout, bfun);
+	  btrace_call_history_src_line (uiout, bfun);
 	  ui_out_text (uiout, "\t");
 	}
 
-      if (bfun->sym != NULL)
-	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (bfun->sym));
-      else if (bfun->msym != NULL)
-	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (bfun->msym));
+      if (sym != NULL)
+	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (sym));
+      else if (msym != NULL)
+	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (msym));
+
       ui_out_text (uiout, "\n");
     }
 }
@@ -495,72 +521,62 @@ static void
 record_btrace_call_history (int size, int flags)
 {
   struct btrace_thread_info *btinfo;
+  struct btrace_call_history *history;
+  struct btrace_call_iterator begin, end;
   struct cleanup *uiout_cleanup;
   struct ui_out *uiout;
-  unsigned int context, last, begin, end;
+  unsigned int context, covered;
 
   uiout = current_uiout;
   uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
 						       "insn history");
-  btinfo = require_btrace ();
-  last = VEC_length (btrace_func_s, btinfo->ftrace);
-
   context = abs (size);
-  begin = btinfo->func_iterator.begin;
-  end = btinfo->func_iterator.end;
-
-  DEBUG ("func-history (0x%x): %d, prev: [%u; %u[", flags, size, begin, end);
-
   if (context == 0)
     error (_("Bad record function-call-history-size."));
 
-  /* We start at the end.  */
-  if (end < begin)
-    {
-      /* Truncate the context, if necessary.  */
-      context = min (context, last);
-
-      end = last;
-      begin = end - context;
-    }
-  else if (size < 0)
+  btinfo = require_btrace ();
+  history = btinfo->call_history;
+  if (history == NULL)
     {
-      if (begin == 0)
-	{
-	  printf_unfiltered (_("At the start of the branch trace record.\n"));
-
-	  btinfo->func_iterator.end = 0;
-	  return;
-	}
+      /* No matter the direction, we start with the tail of the trace.  */
+      btrace_call_end (&begin, btinfo);
+      end = begin;
 
-      /* Truncate the context, if necessary.  */
-      context = min (context, begin);
+      DEBUG ("call-history (0x%x): %d", flags, size);
 
-      end = begin;
-      begin -= context;
+      covered = btrace_call_prev (&begin, context);
     }
   else
     {
-      if (end == last)
-	{
-	  printf_unfiltered (_("At the end of the branch trace record.\n"));
+      begin = history->begin;
+      end = history->end;
 
-	  btinfo->func_iterator.begin = last;
-	  return;
-	}
+      DEBUG ("call-history (0x%x): %d, prev: [%u; %u)", flags, size,
+	     btrace_call_number (&begin), btrace_call_number (&end));
 
-      /* Truncate the context, if necessary.  */
-      context = min (context, last - end);
-
-      begin = end;
-      end += context;
+      if (size < 0)
+	{
+	  end = begin;
+	  covered = btrace_call_prev (&begin, context);
+	}
+      else
+	{
+	  begin = end;
+	  covered = btrace_call_next (&end, context);
+	}
     }
 
-  btrace_func_history (btinfo, uiout, begin, end, flags);
-
-  btinfo->func_iterator.begin = begin;
-  btinfo->func_iterator.end = end;
+  if (covered > 0)
+    btrace_call_history (uiout, &begin, &end, flags);
+  else
+    {
+      if (size < 0)
+	printf_unfiltered (_("At the start of the branch trace record.\n"));
+      else
+	printf_unfiltered (_("At the end of the branch trace record.\n"));
+    }
 
+  btrace_set_call_history (btinfo, &begin, &end);
   do_cleanups (uiout_cleanup);
 }
 
@@ -570,39 +586,41 @@ static void
 record_btrace_call_history_range (ULONGEST from, ULONGEST to, int flags)
 {
   struct btrace_thread_info *btinfo;
+  struct btrace_call_history *history;
+  struct btrace_call_iterator begin, end;
   struct cleanup *uiout_cleanup;
   struct ui_out *uiout;
-  unsigned int last, begin, end;
+  unsigned int low, high;
+  int found;
 
   uiout = current_uiout;
   uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
 						       "func history");
-  btinfo = require_btrace ();
-  last = VEC_length (btrace_func_s, btinfo->ftrace);
+  low = (unsigned int) from;
+  high = (unsigned int) to;
 
-  begin = (unsigned int) from;
-  end = (unsigned int) to;
-
-  DEBUG ("func-history (0x%x): [%u; %u[", flags, begin, end);
+  DEBUG ("call-history (0x%x): [%u; %u)", flags, low, high);
 
   /* Check for wrap-arounds.  */
-  if (begin != from || end != to)
+  if (low != from || high != to)
     error (_("Bad range."));
 
-  if (end <= begin)
+  if (high <= low)
     error (_("Bad range."));
 
-  if (last <= begin)
-    error (_("Range out of bounds."));
+  btinfo = require_btrace ();
 
-  /* Truncate the range, if necessary.  */
-  if (last < end)
-    end = last;
+  found = btrace_find_call_by_number (&begin, btinfo, low);
+  if (found == 0)
+    error (_("Range out of bounds."));
 
-  btrace_func_history (btinfo, uiout, begin, end, flags);
+  /* Silently truncate the range, if necessary.  */
+  found = btrace_find_call_by_number (&end, btinfo, high);
+  if (found == 0)
+    btrace_call_end (&end, btinfo);
 
-  btinfo->func_iterator.begin = begin;
-  btinfo->func_iterator.end = end;
+  btrace_call_history (uiout, &begin, &end, flags);
+  btrace_set_call_history (btinfo, &begin, &end);
 
   do_cleanups (uiout_cleanup);
 }
diff --git a/gdb/testsuite/gdb.btrace/function_call_history.exp b/gdb/testsuite/gdb.btrace/function_call_history.exp
index 97447e1..7658637 100644
--- a/gdb/testsuite/gdb.btrace/function_call_history.exp
+++ b/gdb/testsuite/gdb.btrace/function_call_history.exp
@@ -204,16 +204,18 @@ set bp_location [gdb_get_line_number "bp.2" $testfile.c]
 gdb_breakpoint $bp_location
 gdb_continue_to_breakpoint "cont to $bp_location" ".*$testfile.c:$bp_location.*"
 
-# at this point we expect to have main, fib, ..., fib, main, where fib occurs 8 times,
-# so we limit the output to only show the latest 10 function calls
-gdb_test_no_output "set record function-call-history-size 10"
-set message "show recursive function call history"
-gdb_test_multiple "record function-call-history" $message {
-    -re "13\tmain\r\n14\tfib\r\n15\tfib\r\n16\tfib\r\n17\tfib\r\n18\tfib\r\n19\tfib\r\n20\tfib\r\n21\tfib\r\n22	 main\r\n$gdb_prompt $" {
-        pass $message
-    }
-    -re "13\tinc\r\n14\tmain\r\n15\tinc\r\n16\tmain\r\n17\tinc\r\n18\tmain\r\n19\tinc\r\n20\tmain\r\n21\tfib\r\n22\tmain\r\n$gdb_prompt $" {
-        # recursive function calls appear only as 1 call
-        kfail "gdb/15240" $message
-    }
-}
+# at this point we expect to have main, fib, ..., fib, main, where fib occurs 9 times,
+# so we limit the output to only show the latest 11 function calls
+gdb_test_no_output "set record function-call-history-size 11"
+gdb_test "record function-call-history" "
+20\tmain\r
+21\tfib\r
+22\tfib\r
+23\tfib\r
+24\tfib\r
+25\tfib\r
+26\tfib\r
+27\tfib\r
+28\tfib\r
+29\tfib\r
+30\tmain" "show recursive function call history"
diff --git a/gdb/testsuite/gdb.btrace/instruction_history.exp b/gdb/testsuite/gdb.btrace/instruction_history.exp
index c1a61b7..bd25404 100644
--- a/gdb/testsuite/gdb.btrace/instruction_history.exp
+++ b/gdb/testsuite/gdb.btrace/instruction_history.exp
@@ -56,9 +56,9 @@ gdb_test_multiple "info record" $testname {
     }
 }
 
-# we have exactly 7 instructions here
-set message "exactly 7 instructions"
-if { $traced != 7 } {
+# we have exactly 6 instructions here
+set message "exactly 6 instructions"
+if { $traced != 6 } {
     fail $message
 } else {
     pass $message
@@ -144,6 +144,8 @@ if { $lines != $history_size } {
     pass $message
 }
 
+set history_size 2
+gdb_test_no_output "set record instruction-history-size $history_size"
 set message "browse history forward middle"
 set lines [test_lines_length "record instruction-history +" $message]
 if { $lines != $history_size } {
@@ -165,6 +167,8 @@ gdb_test "record instruction-history" "At the end of the branch trace record\\."
 # make sure we cannot move further
 gdb_test "record instruction-history" "At the end of the branch trace record\\." "browse history forward beyond 2"
 
+set history_size 3
+gdb_test_no_output "set record instruction-history-size $history_size"
 set message "browse history backward last"
 set lines [test_lines_length "record instruction-history -" $message]
 if { $lines != $history_size } {
@@ -173,6 +177,8 @@ if { $lines != $history_size } {
     pass $message
 }
 
+set history_size 2
+gdb_test_no_output "set record instruction-history-size $history_size"
 set message "browse history backward middle"
 set lines [test_lines_length "record instruction-history -" $message]
 if { $lines != $history_size } {
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 20/24] btrace, gdbserver: read branch trace incrementally
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
  2013-07-03  9:14 ` [patch v4 05/24] record-btrace: start counting at one Markus Metzger
  2013-07-03  9:14 ` [patch v4 24/24] record-btrace: skip tail calls in back trace Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:09   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 10/24] target: add ops parameter to to_prepare_to_store method Markus Metzger
                   ` (21 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches, Pedro Alves

Read branch trace data incrementally and extend the current trace rather than
discarding it and reading the entire trace buffer each time.

If the branch trace buffer overflowed, we can't extend the current trace, so we
discard it and start anew by reading the entire branch trace buffer.
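
For illustration, a minimal sketch of the intended read strategy, using only
interfaces introduced by this patch (the helper name fetch_trace_sketch is made
up for this sketch; the real btrace_fetch below additionally tries a "new" read
and clears the old trace and histories):

    /* Sketch only -- not the committed code.  Try a delta read first and
       fall back to re-reading the whole trace buffer on failure.  */
    static int
    fetch_trace_sketch (struct btrace_thread_info *btinfo,
                        struct btrace_target_info *tinfo,
                        VEC (btrace_block_s) **btrace)
    {
      int errcode;

      /* No trace yet: read the entire buffer.  */
      if (btinfo->end == NULL)
        return target_read_btrace (btrace, tinfo, btrace_read_all);

      /* Otherwise, ask only for the delta since the last read.  */
      errcode = target_read_btrace (btrace, tinfo, btrace_read_delta);

      /* A delta read fails if the trace buffer overflowed; in that case
         the old trace must be discarded and the full buffer re-read.  */
      if (errcode != 0)
        errcode = target_read_btrace (btrace, tinfo, btrace_read_all);

      return errcode;
    }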

Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
CC: Pedro Alves  <palves@redhat.com>
2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* common/linux-btrace.c (perf_event_read_bts, linux_read_btrace):
	Support delta reads.
	* common/linux-btrace.h (linux_read_btrace): Change parameters
	and return type to allow error reporting.
	* common/btrace-common.h (btrace_read_type)<btrace_read_delta>:
	New.
	* btrace.c (btrace_compute_ftrace): Start from the end of
	the current trace.
	(btrace_stitch_trace, btrace_clear_history): New.
	(btrace_fetch): Read delta trace.
	(btrace_clear): Move clear history code to btrace_clear_history.
	(parse_xml_btrace): Throw an error if parsing failed.
	* target.h (struct target_ops)<to_read_btrace>: Change parameters
	and return type to allow error reporting.
	(target_read_btrace): Change parameters and return type to allow
	error reporting.
	* target.c (target_read_btrace): Update.
	* remote.c (remote_read_btrace): Support delta reads.  Pass
	errors on.

gdbserver/
	* target.h (target_ops)<read_btrace>: Change parameters and
	return type to allow error reporting.
	* server.c (handle_qxfer_btrace): Support delta reads.  Pass
	trace reading errors on.
	* linux-low.c (linux_low_read_btrace): Pass trace reading
	errors on.


---
 gdb/NEWS                   |    4 +
 gdb/btrace.c               |  136 ++++++++++++++++++++++++++++++++++++++------
 gdb/common/btrace-common.h |    6 ++-
 gdb/common/linux-btrace.c  |   84 +++++++++++++++++++--------
 gdb/common/linux-btrace.h  |    5 +-
 gdb/doc/gdb.texinfo        |    8 +++
 gdb/gdbserver/linux-low.c  |   18 +++++-
 gdb/gdbserver/server.c     |   11 +++-
 gdb/gdbserver/target.h     |    6 +-
 gdb/remote.c               |   23 ++++---
 gdb/target.c               |    9 ++-
 gdb/target.h               |   14 +++--
 12 files changed, 254 insertions(+), 70 deletions(-)

diff --git a/gdb/NEWS b/gdb/NEWS
index 9b9de71..433a968 100644
--- a/gdb/NEWS
+++ b/gdb/NEWS
@@ -124,6 +124,10 @@ qXfer:libraries-svr4:read's annex
   necessary for library list updating, resulting in significant
   speedup.
 
+qXfer:btrace:read's annex
+  The qXfer:btrace:read packet supports a new annex 'delta' to read
+  branch trace incrementally.
+
 * New features in the GDB remote stub, GDBserver
 
   ** GDBserver now supports target-assisted range stepping.  Currently
diff --git a/gdb/btrace.c b/gdb/btrace.c
index 822926c..072e9d3 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -600,9 +600,9 @@ btrace_compute_ftrace (struct btrace_thread_info *btinfo,
   DEBUG ("compute ftrace");
 
   gdbarch = target_gdbarch ();
-  begin = NULL;
-  end = NULL;
-  level = INT_MAX;
+  begin = btinfo->begin;
+  end = btinfo->end;
+  level = begin != NULL ? -btinfo->level : INT_MAX;
   blk = VEC_length (btrace_block_s, btrace);
 
   while (blk != 0)
@@ -718,27 +718,138 @@ btrace_teardown (struct thread_info *tp)
   btrace_clear (tp);
 }
 
+/* Adjust the block trace in order to stitch old and new trace together.
+   Return 0 on success; -1, otherwise.  */
+
+static int
+btrace_stitch_trace (VEC (btrace_block_s) **btrace,
+		     const struct btrace_thread_info *btinfo)
+{
+  struct btrace_function *end;
+  struct btrace_insn *insn;
+  btrace_block_s *block;
+
+  /* If we don't have trace, there's nothing to do.  */
+  if (VEC_empty (btrace_block_s, *btrace))
+    return 0;
+
+  end = btinfo->end;
+  gdb_assert (end != NULL);
+
+  block = VEC_last (btrace_block_s, *btrace);
+  insn = VEC_last (btrace_insn_s, end->insn);
+
+  /* Check if we can extend the trace.  */
+  if (block->end < insn->pc)
+    return -1;
+
+  /* If the current PC at the end of the block is the same as in our current
+     trace, there are two explanations:
+       1. we executed the instruction and some branch brought us back.
+       2. we have not made any progress.
+     In the first case, the delta trace vector should contain at least two
+     entries.
+     In the second case, the delta trace vector should contain exactly one
+     entry for the partial block containing the current PC.  Remove it.  */
+  if (block->end == insn->pc && VEC_length (btrace_block_s, *btrace) == 1)
+    {
+      VEC_pop (btrace_block_s, *btrace);
+      return 0;
+    }
+
+  DEBUG ("stitching %s to %s", ftrace_print_insn_addr (insn),
+	 core_addr_to_string_nz (block->end));
+
+  /* We adjust the last block to start at the end of our current trace.  */
+  gdb_assert (block->begin == 0);
+  block->begin = insn->pc;
+
+  /* We simply pop the last insn so we can insert it again as part of
+     the normal branch trace computation.
+     Since instruction iterators are based on indices in the instructions
+     vector, we don't leave any pointers dangling.  */
+  DEBUG ("pruning insn at %s for stitching", ftrace_print_insn_addr (insn));
+
+  VEC_pop (btrace_insn_s, end->insn);
+
+  /* The instructions vector may become empty temporarily if this has
+     been the only instruction in this function segment.
+     This violates the invariant but will be remedied shortly.  */
+  return 0;
+}
+
+/* Clear the branch trace histories in BTINFO.  */
+
+static void
+btrace_clear_history (struct btrace_thread_info *btinfo)
+{
+  xfree (btinfo->insn_history);
+  xfree (btinfo->call_history);
+  xfree (btinfo->replay);
+
+  btinfo->insn_history = NULL;
+  btinfo->call_history = NULL;
+  btinfo->replay = NULL;
+}
+
 /* See btrace.h.  */
 
 void
 btrace_fetch (struct thread_info *tp)
 {
   struct btrace_thread_info *btinfo;
+  struct btrace_target_info *tinfo;
   VEC (btrace_block_s) *btrace;
   struct cleanup *cleanup;
+  int errcode;
 
   DEBUG ("fetch thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
 
+  btrace = NULL;
   btinfo = &tp->btrace;
-  if (btinfo->target == NULL)
+  tinfo = btinfo->target;
+  if (tinfo == NULL)
     return;
 
-  btrace = target_read_btrace (btinfo->target, btrace_read_new);
   cleanup = make_cleanup (VEC_cleanup (btrace_block_s), &btrace);
 
+  /* Let's first try to extend the trace we already have.  */
+  if (btinfo->end != NULL)
+    {
+      errcode = target_read_btrace (&btrace, tinfo, btrace_read_delta);
+      if (errcode == 0)
+	{
+	  /* Success.  Let's try to stitch the traces together.  */
+	  errcode = btrace_stitch_trace (&btrace, btinfo);
+	}
+      else
+	{
+	  /* We failed to read delta trace.  Let's try to read new trace.  */
+	  errcode = target_read_btrace (&btrace, tinfo, btrace_read_new);
+
+	  /* If we got any new trace, discard what we have.  */
+	  if (errcode == 0 && !VEC_empty (btrace_block_s, btrace))
+	    btrace_clear (tp);
+	}
+
+      /* If we were not able to read the trace, we start over.  */
+      if (errcode != 0)
+	{
+	  btrace_clear (tp);
+	  errcode = target_read_btrace (&btrace, tinfo, btrace_read_all);
+	}
+    }
+  else
+    errcode = target_read_btrace (&btrace, tinfo, btrace_read_all);
+
+  /* If we were not able to read the branch trace, signal an error.  */
+  if (errcode != 0)
+    error (_("Failed to read branch trace."));
+
+  /* Compute the trace, provided we have any.  */
   if (!VEC_empty (btrace_block_s, btrace))
     {
-      btrace_clear (tp);
+      btrace_clear_history (btinfo);
       btrace_compute_ftrace (btinfo, btrace);
     }
 
@@ -773,13 +884,7 @@ btrace_clear (struct thread_info *tp)
   btinfo->begin = NULL;
   btinfo->end = NULL;
 
-  xfree (btinfo->insn_history);
-  xfree (btinfo->call_history);
-  xfree (btinfo->replay);
-
-  btinfo->insn_history = NULL;
-  btinfo->call_history = NULL;
-  btinfo->replay = NULL;
+  btrace_clear_history (btinfo);
 }
 
 /* See btrace.h.  */
@@ -871,10 +976,7 @@ parse_xml_btrace (const char *buffer)
   errcode = gdb_xml_parse_quick (_("btrace"), "btrace.dtd", btrace_elements,
 				 buffer, &btrace);
   if (errcode != 0)
-    {
-      do_cleanups (cleanup);
-      return NULL;
-    }
+    error (_("Error parsing branch trace."));
 
   /* Keep parse results.  */
   discard_cleanups (cleanup);
diff --git a/gdb/common/btrace-common.h b/gdb/common/btrace-common.h
index b157c7c..e863a65 100644
--- a/gdb/common/btrace-common.h
+++ b/gdb/common/btrace-common.h
@@ -67,7 +67,11 @@ enum btrace_read_type
   btrace_read_all,
 
   /* Send all available trace, if it changed.  */
-  btrace_read_new
+  btrace_read_new,
+
+  /* Send the trace since the last request.  This will fail if the trace
+     buffer overflowed.  */
+  btrace_read_delta
 };
 
 #endif /* BTRACE_COMMON_H */
diff --git a/gdb/common/linux-btrace.c b/gdb/common/linux-btrace.c
index b30a6ec..649b535 100644
--- a/gdb/common/linux-btrace.c
+++ b/gdb/common/linux-btrace.c
@@ -169,11 +169,11 @@ perf_event_sample_ok (const struct perf_event_sample *sample)
 
 static VEC (btrace_block_s) *
 perf_event_read_bts (struct btrace_target_info* tinfo, const uint8_t *begin,
-		     const uint8_t *end, const uint8_t *start)
+		     const uint8_t *end, const uint8_t *start, size_t size)
 {
   VEC (btrace_block_s) *btrace = NULL;
   struct perf_event_sample sample;
-  size_t read = 0, size = (end - begin);
+  size_t read = 0;
   struct btrace_block block = { 0, 0 };
   struct regcache *regcache;
 
@@ -249,6 +249,12 @@ perf_event_read_bts (struct btrace_target_info* tinfo, const uint8_t *begin,
       block.end = psample->bts.from;
     }
 
+  /* Push the last block, as well.  We don't know where it ends, but we
+     know where it starts.  If we're reading delta trace, we can fill in the
+     start address later on.  Otherwise, we will prune it.  */
+  block.begin = 0;
+  VEC_safe_push (btrace_block_s, btrace, &block);
+
   return btrace;
 }
 
@@ -501,21 +507,24 @@ linux_btrace_has_changed (struct btrace_target_info *tinfo)
 
 /* See linux-btrace.h.  */
 
-VEC (btrace_block_s) *
-linux_read_btrace (struct btrace_target_info *tinfo,
+int
+linux_read_btrace (VEC (btrace_block_s) **btrace,
+		   struct btrace_target_info *tinfo,
 		   enum btrace_read_type type)
 {
-  VEC (btrace_block_s) *btrace = NULL;
   volatile struct perf_event_mmap_page *header;
   const uint8_t *begin, *end, *start;
-  unsigned long data_head, retries = 5;
-  size_t buffer_size;
+  unsigned long data_head, data_tail, retries = 5;
+  size_t buffer_size, size;
 
+  /* For delta reads, we return at least the partial last block containing
+     the current PC.  */
   if (type == btrace_read_new && !linux_btrace_has_changed (tinfo))
-    return NULL;
+    return 0;
 
   header = perf_event_header (tinfo);
   buffer_size = perf_event_buffer_size (tinfo);
+  data_tail = tinfo->data_head;
 
   /* We may need to retry reading the trace.  See below.  */
   while (retries--)
@@ -523,23 +532,45 @@ linux_read_btrace (struct btrace_target_info *tinfo,
       data_head = header->data_head;
 
       /* Delete any leftover trace from the previous iteration.  */
-      VEC_truncate (btrace_block_s, btrace, 0);
+      VEC_truncate (btrace_block_s, *btrace, 0);
 
-      /* If there's new trace, let's read it.  */
-      if (data_head != tinfo->data_head)
+      if (type == btrace_read_delta)
 	{
-	  /* Data_head keeps growing; the buffer itself is circular.  */
-	  begin = perf_event_buffer_begin (tinfo);
-	  start = begin + data_head % buffer_size;
-
-	  if (data_head <= buffer_size)
-	    end = start;
-	  else
-	    end = perf_event_buffer_end (tinfo);
+	  /* Determine the number of bytes to read and check for buffer
+	     overflows.  */
+
+	  /* Check for data head overflows.  We might be able to recover from
+	     those but they are very unlikely and it's not really worth the
+	     effort, I think.  */
+	  if (data_head < data_tail)
+	    return -EOVERFLOW;
+
+	  /* If the buffer is smaller than the trace delta, we overflowed.  */
+	  size = data_head - data_tail;
+	  if (buffer_size < size)
+	    return -EOVERFLOW;
+	}
+      else
+	{
+	  /* Read the entire buffer.  */
+	  size = buffer_size;
 
-	  btrace = perf_event_read_bts (tinfo, begin, end, start);
+	  /* Adjust the size if the buffer has not overflowed, yet.  */
+	  if (data_head < size)
+	    size = data_head;
 	}
 
+      /* Data_head keeps growing; the buffer itself is circular.  */
+      begin = perf_event_buffer_begin (tinfo);
+      start = begin + data_head % buffer_size;
+
+      if (data_head <= buffer_size)
+	end = start;
+      else
+	end = perf_event_buffer_end (tinfo);
+
+      *btrace = perf_event_read_bts (tinfo, begin, end, start, size);
+
       /* The stopping thread notifies its ptracer before it is scheduled out.
 	 On multi-core systems, the debugger might therefore run while the
 	 kernel might be writing the last branch trace records.
@@ -551,7 +582,11 @@ linux_read_btrace (struct btrace_target_info *tinfo,
 
   tinfo->data_head = data_head;
 
-  return btrace;
+  /* Prune the incomplete last block if we're not doing a delta read.  */
+  if (!VEC_empty (btrace_block_s, *btrace) && type != btrace_read_delta)
+    VEC_pop (btrace_block_s, *btrace);
+
+  return 0;
 }
 
 #else /* !HAVE_LINUX_PERF_EVENT_H */
@@ -582,11 +617,12 @@ linux_disable_btrace (struct btrace_target_info *tinfo)
 
 /* See linux-btrace.h.  */
 
-VEC (btrace_block_s) *
-linux_read_btrace (struct btrace_target_info *tinfo,
+int
+linux_read_btrace (VEC (btrace_block_s) **btrace,
+		   struct btrace_target_info *tinfo,
 		   enum btrace_read_type type)
 {
-  return NULL;
+  return ENOSYS;
 }
 
 #endif /* !HAVE_LINUX_PERF_EVENT_H */
diff --git a/gdb/common/linux-btrace.h b/gdb/common/linux-btrace.h
index d4e8402..82397b7 100644
--- a/gdb/common/linux-btrace.h
+++ b/gdb/common/linux-btrace.h
@@ -71,7 +71,8 @@ extern struct btrace_target_info *linux_enable_btrace (ptid_t ptid);
 extern int linux_disable_btrace (struct btrace_target_info *tinfo);
 
 /* Read branch trace data.  */
-extern VEC (btrace_block_s) *linux_read_btrace (struct btrace_target_info *,
-						enum btrace_read_type);
+extern int linux_read_btrace (VEC (btrace_block_s) **,
+			      struct btrace_target_info *,
+			      enum btrace_read_type);
 
 #endif /* LINUX_BTRACE_H */
diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
index eb4896f..2dc45bc 100644
--- a/gdb/doc/gdb.texinfo
+++ b/gdb/doc/gdb.texinfo
@@ -39161,6 +39161,14 @@ Returns all available branch trace.
 @item new
 Returns all available branch trace if the branch trace changed since
 the last read request.
+
+@item delta
+Returns the new branch trace since the last read request.  Adds a new
+block to the end of the trace that begins at zero and ends at the source
+location of the first branch in the trace buffer.  This extra block is
+used to stitch traces together.
+
+If the trace buffer overflowed, returns an error indicating the overflow.
 @end table
 
 This packet is not probed by default; the remote stub must request it
diff --git a/gdb/gdbserver/linux-low.c b/gdb/gdbserver/linux-low.c
index 47ea76d..709405c 100644
--- a/gdb/gdbserver/linux-low.c
+++ b/gdb/gdbserver/linux-low.c
@@ -5964,15 +5964,25 @@ linux_low_enable_btrace (ptid_t ptid)
 
 /* Read branch trace data as btrace xml document.  */
 
-static void
+static int
 linux_low_read_btrace (struct btrace_target_info *tinfo, struct buffer *buffer,
 		       int type)
 {
   VEC (btrace_block_s) *btrace;
   struct btrace_block *block;
-  int i;
+  int i, errcode;
+
+  btrace = NULL;
+  errcode = linux_read_btrace (&btrace, tinfo, type);
+  if (errcode != 0)
+    {
+      if (errcode == -EOVERFLOW)
+	buffer_grow_str (buffer, "E.Overflow.");
+      else
+	buffer_grow_str (buffer, "E.Generic Error.");
 
-  btrace = linux_read_btrace (tinfo, type);
+      return -1;
+    }
 
   buffer_grow_str (buffer, "<!DOCTYPE btrace SYSTEM \"btrace.dtd\">\n");
   buffer_grow_str (buffer, "<btrace version=\"1.0\">\n");
@@ -5984,6 +5994,8 @@ linux_low_read_btrace (struct btrace_target_info *tinfo, struct buffer *buffer,
   buffer_grow_str (buffer, "</btrace>\n");
 
   VEC_free (btrace_block_s, btrace);
+
+  return 0;
 }
 #endif /* HAVE_LINUX_BTRACE */
 
diff --git a/gdb/gdbserver/server.c b/gdb/gdbserver/server.c
index a172c98..c518f62 100644
--- a/gdb/gdbserver/server.c
+++ b/gdb/gdbserver/server.c
@@ -1343,7 +1343,7 @@ handle_qxfer_btrace (const char *annex,
 {
   static struct buffer cache;
   struct thread_info *thread;
-  int type;
+  int type, result;
 
   if (the_target->read_btrace == NULL || writebuf != NULL)
     return -2;
@@ -1375,6 +1375,8 @@ handle_qxfer_btrace (const char *annex,
     type = btrace_read_all;
   else if (strcmp (annex, "new") == 0)
     type = btrace_read_new;
+  else if (strcmp (annex, "delta") == 0)
+    type = btrace_read_delta;
   else
     {
       strcpy (own_buf, "E.Bad annex.");
@@ -1385,7 +1387,12 @@ handle_qxfer_btrace (const char *annex,
     {
       buffer_free (&cache);
 
-      target_read_btrace (thread->btrace, &cache, type);
+      result = target_read_btrace (thread->btrace, &cache, type);
+      if (result != 0)
+	{
+	  memcpy (own_buf, cache.buffer, cache.used_size);
+	  return -3;
+	}
     }
   else if (offset > cache.used_size)
     {
diff --git a/gdb/gdbserver/target.h b/gdb/gdbserver/target.h
index c57cb40..1bb1f23 100644
--- a/gdb/gdbserver/target.h
+++ b/gdb/gdbserver/target.h
@@ -420,8 +420,10 @@ struct target_ops
   int (*disable_btrace) (struct btrace_target_info *tinfo);
 
   /* Read branch trace data into buffer.  We use an int to specify the type
-     to break a cyclic dependency.  */
-  void (*read_btrace) (struct btrace_target_info *, struct buffer *, int type);
+     to break a cyclic dependency.
+     Return 0 on success; print an error message into BUFFER and return -1,
+     otherwise.  */
+  int (*read_btrace) (struct btrace_target_info *, struct buffer *, int type);
 
   /* Return true if target supports range stepping.  */
   int (*supports_range_stepping) (void);
diff --git a/gdb/remote.c b/gdb/remote.c
index b352ca6..705aa66 100644
--- a/gdb/remote.c
+++ b/gdb/remote.c
@@ -11417,13 +11417,14 @@ remote_teardown_btrace (struct btrace_target_info *tinfo)
 
 /* Read the branch trace.  */
 
-static VEC (btrace_block_s) *
-remote_read_btrace (struct btrace_target_info *tinfo,
+static int
+remote_read_btrace (VEC (btrace_block_s) **btrace,
+		    struct btrace_target_info *tinfo,
 		    enum btrace_read_type type)
 {
   struct packet_config *packet = &remote_protocol_packets[PACKET_qXfer_btrace];
   struct remote_state *rs = get_remote_state ();
-  VEC (btrace_block_s) *btrace = NULL;
+  struct cleanup *cleanup;
   const char *annex;
   char *xml;
 
@@ -11442,6 +11443,9 @@ remote_read_btrace (struct btrace_target_info *tinfo,
     case btrace_read_new:
       annex = "new";
       break;
+    case btrace_read_delta:
+      annex = "delta";
+      break;
     default:
       internal_error (__FILE__, __LINE__,
 		      _("Bad branch tracing read type: %u."),
@@ -11450,15 +11454,14 @@ remote_read_btrace (struct btrace_target_info *tinfo,
 
   xml = target_read_stralloc (&current_target,
                               TARGET_OBJECT_BTRACE, annex);
-  if (xml != NULL)
-    {
-      struct cleanup *cleanup = make_cleanup (xfree, xml);
+  if (xml == NULL)
+    return -1;
 
-      btrace = parse_xml_btrace (xml);
-      do_cleanups (cleanup);
-    }
+  cleanup = make_cleanup (xfree, xml);
+  *btrace = parse_xml_btrace (xml);
+  do_cleanups (cleanup);
 
-  return btrace;
+  return 0;
 }
 
 static int
diff --git a/gdb/target.c b/gdb/target.c
index 58388f3..33f774e 100644
--- a/gdb/target.c
+++ b/gdb/target.c
@@ -4237,18 +4237,19 @@ target_teardown_btrace (struct btrace_target_info *btinfo)
 
 /* See target.h.  */
 
-VEC (btrace_block_s) *
-target_read_btrace (struct btrace_target_info *btinfo,
+int
+target_read_btrace (VEC (btrace_block_s) **btrace,
+		    struct btrace_target_info *btinfo,
 		    enum btrace_read_type type)
 {
   struct target_ops *t;
 
   for (t = current_target.beneath; t != NULL; t = t->beneath)
     if (t->to_read_btrace != NULL)
-      return t->to_read_btrace (btinfo, type);
+      return t->to_read_btrace (btrace, btinfo, type);
 
   tcomplain ();
-  return NULL;
+  return ENOSYS;
 }
 
 /* See target.h.  */
diff --git a/gdb/target.h b/gdb/target.h
index 632bf1d..4a20533 100644
--- a/gdb/target.h
+++ b/gdb/target.h
@@ -882,9 +882,12 @@ struct target_ops
        be attempting to talk to a remote target.  */
     void (*to_teardown_btrace) (struct btrace_target_info *tinfo);
 
-    /* Read branch trace data.  */
-    VEC (btrace_block_s) *(*to_read_btrace) (struct btrace_target_info *,
-					     enum btrace_read_type);
+    /* Read branch trace data into DATA.  The vector is cleared before any
+       new data is added.
+       Returns 0 on success; a negative error code, otherwise.  */
+    int (*to_read_btrace) (VEC (btrace_block_s) **data,
+			   struct btrace_target_info *,
+			   enum btrace_read_type);
 
     /* Stop trace recording.  */
     void (*to_stop_recording) (void);
@@ -2010,8 +2013,9 @@ extern void target_disable_btrace (struct btrace_target_info *btinfo);
 extern void target_teardown_btrace (struct btrace_target_info *btinfo);
 
 /* See to_read_btrace in struct target_ops.  */
-extern VEC (btrace_block_s) *target_read_btrace (struct btrace_target_info *,
-						 enum btrace_read_type);
+extern int target_read_btrace (VEC (btrace_block_s) **,
+			       struct btrace_target_info *,
+			       enum btrace_read_type);
 
 /* See to_stop_recording in struct target_ops.  */
 extern void target_stop_recording (void);
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 08/24] record-btrace: make ranges include begin and end
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (5 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 07/24] record-btrace: optionally indent function call history Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:12   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 16/24] record-btrace: provide target_find_new_threads method Markus Metzger
                   ` (17 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches, Christian Himpel

The "record function-call-history" and "record instruction-history" commands
accept a range "begin, end".  End is not included in either case.  Include it.
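
For example, after this change "record function-call-history 4,10" prints
entries 4 through 10 inclusive, where the old semantics required "4,11" for the
same output.  A minimal sketch of the adjusted range arithmetic for the
relative forms, mirroring the hunks below (from, context, begin and end as in
the patch; illustration only):

    /* Sketch only: an inclusive range of CONTEXT entries anchored at FROM.  */
    if (size < 0)
      {
        end = from;
        begin = (from < context) ? 0 : from - context + 1;
      }
    else
      {
        begin = from;
        end = from + context - 1;  /* previously: from + context (exclusive)  */
      }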

Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
CC: Christian Himpel  <christian.himpel@intel.com>
2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* record-btrace.c (record_btrace_insn_history_range): Include
	end.
	(record_btrace_insn_history_from): Adjust range.
	(record_btrace_call_history_range): Include
	end.
	(record_btrace_call_history_from): Adjust range.

testsuite/
	* gdb.btrace/function_call_history.exp: Update tests.
	* gdb.btrace/instruction_history.exp: Update tests.

doc/
	* gdb.texinfo (Process Record and Replay): Update documentation.


---
 gdb/doc/gdb.texinfo                                |    6 +--
 gdb/record-btrace.c                                |   34 +++++++++++++++-----
 gdb/testsuite/gdb.btrace/function_call_history.exp |    4 +-
 gdb/testsuite/gdb.btrace/instruction_history.exp   |    6 ++--
 4 files changed, 33 insertions(+), 17 deletions(-)

diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
index 2cfc20b..eb4896f 100644
--- a/gdb/doc/gdb.texinfo
+++ b/gdb/doc/gdb.texinfo
@@ -6393,8 +6393,7 @@ Disassembles ten more instructions before the last disassembly.
 
 @item record instruction-history @var{begin} @var{end}
 Disassembles instructions beginning with instruction number
-@var{begin} until instruction number @var{end}.  The instruction
-number @var{end} is not included.
+@var{begin} until instruction number @var{end}.
 @end table
 
 This command may not be available for all recording methods.
@@ -6464,8 +6463,7 @@ Prints ten more functions before the last ten-line print.
 
 @item record function-call-history @var{begin} @var{end}
 Prints functions beginning with function number @var{begin} until
-function number @var{end}.  The function number @var{end} is not
-included.
+function number @var{end}.
 @end table
 
 This command may not be available for all recording methods.
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 99dc046..c7d6e9f 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -377,10 +377,17 @@ record_btrace_insn_history_range (ULONGEST from, ULONGEST to, int flags)
   if (found == 0)
     error (_("Range out of bounds."));
 
-  /* Silently truncate the range, if necessary.  */
   found = btrace_find_insn_by_number (&end, btinfo, high);
   if (found == 0)
-    btrace_insn_end (&end, btinfo);
+    {
+      /* Silently truncate the range.  */
+      btrace_insn_end (&end, btinfo);
+    }
+  else
+    {
+      /* We want both begin and end to be inclusive.  */
+      btrace_insn_next (&end, 1);
+    }
 
   btrace_insn_history (uiout, &begin, &end, flags);
   btrace_set_insn_history (btinfo, &begin, &end);
@@ -396,6 +403,8 @@ record_btrace_insn_history_from (ULONGEST from, int size, int flags)
   ULONGEST begin, end, context;
 
   context = abs (size);
+  if (context == 0)
+    error (_("Bad record instruction-history-size."));
 
   if (size < 0)
     {
@@ -404,12 +413,12 @@ record_btrace_insn_history_from (ULONGEST from, int size, int flags)
       if (from < context)
 	begin = 0;
       else
-	begin = from - context;
+	begin = from - context + 1;
     }
   else
     {
       begin = from;
-      end = from + context;
+      end = from + context - 1;
 
       /* Check for wrap-around.  */
       if (end < begin)
@@ -629,10 +638,17 @@ record_btrace_call_history_range (ULONGEST from, ULONGEST to, int flags)
   if (found == 0)
     error (_("Range out of bounds."));
 
-  /* Silently truncate the range, if necessary.  */
   found = btrace_find_call_by_number (&end, btinfo, high);
   if (found == 0)
-    btrace_call_end (&end, btinfo);
+    {
+      /* Silently truncate the range.  */
+      btrace_call_end (&end, btinfo);
+    }
+  else
+    {
+      /* We want both begin and end to be inclusive.  */
+      btrace_call_next (&end, 1);
+    }
 
   btrace_call_history (uiout, btinfo, &begin, &end, flags);
   btrace_set_call_history (btinfo, &begin, &end);
@@ -648,6 +664,8 @@ record_btrace_call_history_from (ULONGEST from, int size, int flags)
   ULONGEST begin, end, context;
 
   context = abs (size);
+  if (context == 0)
+    error (_("Bad record function-call-history-size."));
 
   if (size < 0)
     {
@@ -656,12 +674,12 @@ record_btrace_call_history_from (ULONGEST from, int size, int flags)
       if (from < context)
 	begin = 0;
       else
-	begin = from - context;
+	begin = from - context + 1;
     }
   else
     {
       begin = from;
-      end = from + context;
+      end = from + context - 1;
 
       /* Check for wrap-around.  */
       if (end < begin)
diff --git a/gdb/testsuite/gdb.btrace/function_call_history.exp b/gdb/testsuite/gdb.btrace/function_call_history.exp
index 754cbbe..901e487 100644
--- a/gdb/testsuite/gdb.btrace/function_call_history.exp
+++ b/gdb/testsuite/gdb.btrace/function_call_history.exp
@@ -222,9 +222,9 @@ set expected_range "4\tinc\r
 10\tinc\r"
 
 # show functions in instruction range
-gdb_test "record function-call-history 4,11" $expected_range "absolute instruction range"
+gdb_test "record function-call-history 4,10" $expected_range "absolute instruction range"
 gdb_test "record function-call-history 4,+7" $expected_range "relative positive instruction range"
-gdb_test "record function-call-history 11,-7" $expected_range "relative negative instruction range"
+gdb_test "record function-call-history 10,-7" $expected_range "relative negative instruction range"
 
 # set bp after fib recursion and continue
 set bp_location [gdb_get_line_number "bp.2" $testfile.c]
diff --git a/gdb/testsuite/gdb.btrace/instruction_history.exp b/gdb/testsuite/gdb.btrace/instruction_history.exp
index df2728b..e7a0e8e 100644
--- a/gdb/testsuite/gdb.btrace/instruction_history.exp
+++ b/gdb/testsuite/gdb.btrace/instruction_history.exp
@@ -65,7 +65,7 @@ if { $traced != 6 } {
 }
 
 # test that we see the expected instructions
-gdb_test "record instruction-history 2,7" "
+gdb_test "record instruction-history 2,6" "
 2\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
 3\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
 4\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
@@ -79,14 +79,14 @@ gdb_test "record instruction-history /f 2,+5" "
 5\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
 6\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
 
-gdb_test "record instruction-history /p 7,-5" "
+gdb_test "record instruction-history /p 6,-5" "
 2\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
 3\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
 4\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
 5\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
 6\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
 
-gdb_test "record instruction-history /pf 2,7" "
+gdb_test "record instruction-history /pf 2,6" "
 2\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
 3\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
 4\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 05/24] record-btrace: start counting at one
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:11   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 24/24] record-btrace: skip tail calls in back trace Markus Metzger
                   ` (23 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

The "record instruction-history" and "record function-call-history" commands
start counting at zero.  This is somewhat unintuitive when navigating the
recorded history.  Start at one, instead.

2013-07-03  Markus Metzger <markus.t.metzger@intel.com>

    * btrace.c (ftrace_new_function): Start counting at one.

testsuite/
    * gdb.btrace/instruction_history.exp: Update.
    * gdb.btrace/function_call_history.exp: Update.


---
 gdb/btrace.c                                       |    8 +-
 gdb/record-btrace.c                                |    4 +-
 gdb/testsuite/gdb.btrace/function_call_history.exp |  198 ++++++++++----------
 gdb/testsuite/gdb.btrace/instruction_history.exp   |   60 +++---
 4 files changed, 138 insertions(+), 132 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index 53549db..006deaa 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -212,7 +212,13 @@ ftrace_new_function (struct btrace_function *prev,
   bfun->lbegin = INT_MAX;
   bfun->lend = INT_MIN;
 
-  if (prev != NULL)
+  if (prev == NULL)
+    {
+      /* Start counting at one.  */
+      bfun->number = 1;
+      bfun->insn_offset = 1;
+    }
+  else
     {
       gdb_assert (prev->flow.next == NULL);
       prev->flow.next = bfun;
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index d9a2ba7..df69a41 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -227,11 +227,11 @@ record_btrace_info (void)
 
       btrace_call_end (&call, btinfo);
       btrace_call_prev (&call, 1);
-      calls = btrace_call_number (&call) + 1;
+      calls = btrace_call_number (&call);
 
       btrace_insn_end (&insn, btinfo);
       btrace_insn_prev (&insn, 1);
-      insns = btrace_insn_number (&insn) + 1;
+      insns = btrace_insn_number (&insn);
     }
 
   printf_unfiltered (_("Recorded %u instructions in %u functions for thread "
diff --git a/gdb/testsuite/gdb.btrace/function_call_history.exp b/gdb/testsuite/gdb.btrace/function_call_history.exp
index 7658637..d694d5c 100644
--- a/gdb/testsuite/gdb.btrace/function_call_history.exp
+++ b/gdb/testsuite/gdb.btrace/function_call_history.exp
@@ -40,81 +40,81 @@ gdb_continue_to_breakpoint "cont to $bp_location" ".*$testfile.c:$bp_location.*"
 # show function call history with unlimited size, we expect to see all 21 entries
 gdb_test_no_output "set record function-call-history-size 0"
 gdb_test "record function-call-history" "
-0\tmain\r
-1\tinc\r
-2\tmain\r
-3\tinc\r
-4\tmain\r
-5\tinc\r
-6\tmain\r
-7\tinc\r
-8\tmain\r
-9\tinc\r
-10\tmain\r
-11\tinc\r
-12\tmain\r
-13\tinc\r
-14\tmain\r
-15\tinc\r
-16\tmain\r
-17\tinc\r
-18\tmain\r
-19\tinc\r
-20\tmain\r" "record function-call-history - with size unlimited"
+1\tmain\r
+2\tinc\r
+3\tmain\r
+4\tinc\r
+5\tmain\r
+6\tinc\r
+7\tmain\r
+8\tinc\r
+9\tmain\r
+10\tinc\r
+11\tmain\r
+12\tinc\r
+13\tmain\r
+14\tinc\r
+15\tmain\r
+16\tinc\r
+17\tmain\r
+18\tinc\r
+19\tmain\r
+20\tinc\r
+21\tmain\r" "record function-call-history - with size unlimited"
 
 # show function call history with size of 21, we expect to see all 21 entries
 gdb_test_no_output "set record function-call-history-size 21"
 # show function call history
-gdb_test "record function-call-history 0" "
-0\tmain\r
-1\tinc\r
-2\tmain\r
-3\tinc\r
-4\tmain\r
-5\tinc\r
-6\tmain\r
-7\tinc\r
-8\tmain\r
-9\tinc\r
-10\tmain\r
-11\tinc\r
-12\tmain\r
-13\tinc\r
-14\tmain\r
-15\tinc\r
-16\tmain\r
-17\tinc\r
-18\tmain\r
-19\tinc\r
-20\tmain\r" "record function-call-history - show all 21 entries"
+gdb_test "record function-call-history 1" "
+1\tmain\r
+2\tinc\r
+3\tmain\r
+4\tinc\r
+5\tmain\r
+6\tinc\r
+7\tmain\r
+8\tinc\r
+9\tmain\r
+10\tinc\r
+11\tmain\r
+12\tinc\r
+13\tmain\r
+14\tinc\r
+15\tmain\r
+16\tinc\r
+17\tmain\r
+18\tinc\r
+19\tmain\r
+20\tinc\r
+21\tmain\r" "record function-call-history - show all 21 entries"
 
 # show first 15 entries
 gdb_test_no_output "set record function-call-history-size 15"
-gdb_test "record function-call-history 0" "
-0\tmain\r
-1\tinc\r
-2\tmain\r
-3\tinc\r
-4\tmain\r
-5\tinc\r
-6\tmain\r
-7\tinc\r
-8\tmain\r
-9\tinc\r
-10\tmain\r
-11\tinc\r
-12\tmain\r
-13\tinc\r
-14\tmain\r" "record function-call-history - show first 15 entries"
+gdb_test "record function-call-history 1" "
+1\tmain\r
+2\tinc\r
+3\tmain\r
+4\tinc\r
+5\tmain\r
+6\tinc\r
+7\tmain\r
+8\tinc\r
+9\tmain\r
+10\tinc\r
+11\tmain\r
+12\tinc\r
+13\tmain\r
+14\tinc\r
+15\tmain\r" "record function-call-history - show first 15 entries"
 
 # show last 6 entries
 gdb_test "record function-call-history +" "
-15\tinc\r
-16\tmain\r
-17\tinc\r
-18\tmain\r
-19\tinc\r
-20\tmain\r" "record function-call-history - show last 6 entries"
+16\tinc\r
+17\tmain\r
+18\tinc\r
+19\tmain\r
+20\tinc\r
+21\tmain\r" "record function-call-history - show last 6 entries"
 
 # moving further should not work
 gdb_test "record function-call-history +" "At the end of the branch trace record\\." "record function-call-history - at the end (1)"
@@ -124,30 +124,30 @@ gdb_test "record function-call-history +" "At the end of the branch trace record
 
 # moving back showing the latest 15 function calls
 gdb_test "record function-call-history -" "
-6\tmain\r
-7\tinc\r
-8\tmain\r
-9\tinc\r
-10\tmain\r
-11\tinc\r
-12\tmain\r
-13\tinc\r
-14\tmain\r
-15\tinc\r
-16\tmain\r
-17\tinc\r
-18\tmain\r
-19\tinc\r
-20\tmain\r" "record function-call-history - show last 15 entries"
+7\tmain\r
+8\tinc\r
+9\tmain\r
+10\tinc\r
+11\tmain\r
+12\tinc\r
+13\tmain\r
+14\tinc\r
+15\tmain\r
+16\tinc\r
+17\tmain\r
+18\tinc\r
+19\tmain\r
+20\tinc\r
+21\tmain\r" "record function-call-history - show last 15 entries"
 
 # moving further back shows the 6 first function calls
 gdb_test "record function-call-history -" "
-0\tmain\r
-1\tinc\r
-2\tmain\r
-3\tinc\r
-4\tmain\r
-5\tinc\r" "record function-call-history - show first 6 entries"
+1\tmain\r
+2\tinc\r
+3\tmain\r
+4\tinc\r
+5\tmain\r
+6\tinc\r" "record function-call-history - show first 6 entries"
 
 # moving further back shouldn't work
 gdb_test "record function-call-history -" "At the start of the branch trace record\\." "record function-call-history - at the start (1)"
@@ -186,18 +186,18 @@ gdb_test "record function-call-history /l +" "
 gdb_test "record function-call-history /l +" "At the end of the branch trace record\\." "record function-call-history /l - at the end (1)"
 gdb_test "record function-call-history /l" "At the end of the branch trace record\\." "record function-call-history /l - at the end (2)"
 
-set expected_range "3\tinc\r
-4\tmain\r
-5\tinc\r
-6\tmain\r
-7\tinc\r
-8\tmain\r
-9\tinc\r"
+set expected_range "4\tinc\r
+5\tmain\r
+6\tinc\r
+7\tmain\r
+8\tinc\r
+9\tmain\r
+10\tinc\r"
 
 # show functions in instruction range
-gdb_test "record function-call-history 3,10" $expected_range "absolute instruction range"
-gdb_test "record function-call-history 3,+7" $expected_range "relative positive instruction range"
-gdb_test "record function-call-history 10,-7" $expected_range "relative negative instruction range"
+gdb_test "record function-call-history 4,11" $expected_range "absolute instruction range"
+gdb_test "record function-call-history 4,+7" $expected_range "relative positive instruction range"
+gdb_test "record function-call-history 11,-7" $expected_range "relative negative instruction range"
 
 # set bp after fib recursion and continue
 set bp_location [gdb_get_line_number "bp.2" $testfile.c]
@@ -208,8 +208,7 @@ gdb_continue_to_breakpoint "cont to $bp_location" ".*$testfile.c:$bp_location.*"
 # so we limit the output to only show the latest 11 function calls
 gdb_test_no_output "set record function-call-history-size 11"
 gdb_test "record function-call-history" "
-20\tmain\r
-21\tfib\r
+21\tmain\r
 22\tfib\r
 23\tfib\r
 24\tfib\r
@@ -218,4 +217,5 @@ gdb_test "record function-call-history" "
 27\tfib\r
 28\tfib\r
 29\tfib\r
-30\tmain" "show recursive function call history"
+30\tfib\r
+31\tmain" "show recursive function call history"
diff --git a/gdb/testsuite/gdb.btrace/instruction_history.exp b/gdb/testsuite/gdb.btrace/instruction_history.exp
index bd25404..df2728b 100644
--- a/gdb/testsuite/gdb.btrace/instruction_history.exp
+++ b/gdb/testsuite/gdb.btrace/instruction_history.exp
@@ -65,33 +65,33 @@ if { $traced != 6 } {
 }
 
 # test that we see the expected instructions
-gdb_test "record instruction-history 1,6" "
-1\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-2\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
-3\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-4\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
-5\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
-
-gdb_test "record instruction-history /f 1,+5" "
-1\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-2\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
-3\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-4\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
-5\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
-
-gdb_test "record instruction-history /p 6,-5" "
-1\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-2\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
-3\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-4\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
-5\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
-
-gdb_test "record instruction-history /pf 1,6" "
-1\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-2\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
-3\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-4\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
-5\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
+gdb_test "record instruction-history 2,7" "
+2\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+3\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
+4\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+5\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
+6\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
+
+gdb_test "record instruction-history /f 2,+5" "
+2\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+3\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
+4\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+5\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
+6\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
+
+gdb_test "record instruction-history /p 7,-5" "
+2\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+3\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
+4\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+5\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
+6\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
+
+gdb_test "record instruction-history /pf 2,7" "
+2\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+3\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
+4\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+5\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
+6\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
 
 # the following tests are checking the iterators
 # to avoid lots of regexps, we just check the number of lines that
@@ -117,7 +117,7 @@ proc test_lines_length { command message } {
 # all $traced instructions
 gdb_test_no_output "set record instruction-history-size 0"
 set message "record instruction-history - unlimited"
-set lines [test_lines_length "record instruction-history 0" $message]
+set lines [test_lines_length "record instruction-history 1" $message]
 if { $traced != $lines } {
     fail $message
 } else {
@@ -126,7 +126,7 @@ if { $traced != $lines } {
 
 gdb_test_no_output "set record instruction-history-size $traced"
 set message "record instruction-history - traced"
-set lines [test_lines_length "record instruction-history 0" $message]
+set lines [test_lines_length "record instruction-history 1" $message]
 if { $traced != $lines } {
     fail $message
 } else {
@@ -137,7 +137,7 @@ if { $traced != $lines } {
 set history_size 3
 gdb_test_no_output "set record instruction-history-size $history_size"
 set message "browse history forward start"
-set lines [test_lines_length "record instruction-history 0" $message]
+set lines [test_lines_length "record instruction-history 1" $message]
 if { $lines != $history_size } {
     fail $message
 } else {
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 02/24] record: upcase record_print_flag enumeration constants
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (8 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 11/24] record-btrace: supply register target methods Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:11   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 19/24] btrace, linux: fix memory leak when reading branch trace Markus Metzger
                   ` (14 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* record.h (record_print_flag) <record_print_src_line,
	record_print_insn_range>: Rename into ...
	(record_print_flag) <RECORD_PRINT_SRC_LINE,
	RECORD_PRINT_INSN_RANGE>: ... this.  Update all users.


---
 gdb/record-btrace.c |    4 ++--
 gdb/record.c        |    4 ++--
 gdb/record.h        |    4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 8fb413e..68f40c8 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -469,13 +469,13 @@ btrace_func_history (struct btrace_thread_info *btinfo, struct ui_out *uiout,
       ui_out_field_uint (uiout, "index", idx);
       ui_out_text (uiout, "\t");
 
-      if ((flags & record_print_insn_range) != 0)
+      if ((flags & RECORD_PRINT_INSN_RANGE) != 0)
 	{
 	  btrace_func_history_insn_range (uiout, bfun);
 	  ui_out_text (uiout, "\t");
 	}
 
-      if ((flags & record_print_src_line) != 0)
+      if ((flags & RECORD_PRINT_SRC_LINE) != 0)
 	{
 	  btrace_func_history_src_line (uiout, bfun);
 	  ui_out_text (uiout, "\t");
diff --git a/gdb/record.c b/gdb/record.c
index cbbe365..07b1b97 100644
--- a/gdb/record.c
+++ b/gdb/record.c
@@ -570,10 +570,10 @@ get_call_history_modifiers (char **arg)
 	  switch (*args)
 	    {
 	    case 'l':
-	      modifiers |= record_print_src_line;
+	      modifiers |= RECORD_PRINT_SRC_LINE;
 	      break;
 	    case 'i':
-	      modifiers |= record_print_insn_range;
+	      modifiers |= RECORD_PRINT_INSN_RANGE;
 	      break;
 	    default:
 	      error (_("Invalid modifier: %c."), *args);
diff --git a/gdb/record.h b/gdb/record.h
index 86e6bc6..65d508f 100644
--- a/gdb/record.h
+++ b/gdb/record.h
@@ -36,10 +36,10 @@ extern struct cmd_list_element *info_record_cmdlist;
 enum record_print_flag
 {
   /* Print the source file and line (if applicable).  */
-  record_print_src_line = (1 << 0),
+  RECORD_PRINT_SRC_LINE = (1 << 0),
 
   /* Print the instruction number range (if applicable).  */
-  record_print_insn_range = (1 << 1),
+  RECORD_PRINT_INSN_RANGE = (1 << 1),
 };
 
 /* Wrapper for target_read_memory that prints a debug message if
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 10/24] target: add ops parameter to to_prepare_to_store method
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (2 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 20/24] btrace, gdbserver: read branch trace incrementally Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:07   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 14/24] record-btrace: provide xfer_partial target method Markus Metzger
                   ` (20 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

To allow forwarding the prepare_to_store request to the target beneath,
add a target_ops * parameter.
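
For illustration, a minimal sketch of the new method signature (the function
and target name below are hypothetical; the actual per-target changes are in
the diff):

  /* Hypothetical target method; OPS is the target this method was called on.  */
  static void
  example_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
  {
    /* Forward the request to the target beneath, now that we know which
       target we were called on.  */
    struct target_ops *beneath = find_target_beneath (ops);

    beneath->to_prepare_to_store (beneath, regcache);
  }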

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* target.h (target_ops) <to_prepare_to_store>: Add parameter.
	(target_prepare_to_store): Remove macro.  New function.
	* target.c (update_current_target): Do not inherit/default
	prepare_to_store.
	(target_prepare_to_store): New.
	(debug_to_prepare_to_store): Remove.
	* remote.c (remote_prepare_to_store): Add parameter.
	* remote-mips.c (mips_prepare_to_store): Add parameter.
	* remote-m32r-sdi.c (m32r_prepare_to_store): Add parameter.
	* ravenscar-thread.c (ravenscar_prepare_to_store): Add
	parameter.
	* monitor.c (monitor_prepare_to_store): Add parameter.
	* inf-child.c (inf_child_prepare_to_store): Add parameter.


---
 gdb/inf-child.c        |    2 +-
 gdb/monitor.c          |    2 +-
 gdb/ravenscar-thread.c |    7 ++++---
 gdb/record-full.c      |    3 ++-
 gdb/remote-m32r-sdi.c  |    2 +-
 gdb/remote-mips.c      |    5 +++--
 gdb/remote.c           |    5 +++--
 gdb/target.c           |   36 +++++++++++++++++++++---------------
 gdb/target.h           |    5 ++---
 9 files changed, 38 insertions(+), 29 deletions(-)

diff --git a/gdb/inf-child.c b/gdb/inf-child.c
index f5992bb..3be4315 100644
--- a/gdb/inf-child.c
+++ b/gdb/inf-child.c
@@ -100,7 +100,7 @@ inf_child_post_attach (int pid)
    program being debugged.  */
 
 static void
-inf_child_prepare_to_store (struct regcache *regcache)
+inf_child_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
 {
 }
 
diff --git a/gdb/monitor.c b/gdb/monitor.c
index beca4e4..8b1059c 100644
--- a/gdb/monitor.c
+++ b/gdb/monitor.c
@@ -1427,7 +1427,7 @@ monitor_store_registers (struct target_ops *ops,
    debugged.  */
 
 static void
-monitor_prepare_to_store (struct regcache *regcache)
+monitor_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
 {
   /* Do nothing, since we can store individual regs.  */
 }
diff --git a/gdb/ravenscar-thread.c b/gdb/ravenscar-thread.c
index 0a3100d..adcd3a2 100644
--- a/gdb/ravenscar-thread.c
+++ b/gdb/ravenscar-thread.c
@@ -62,7 +62,8 @@ static void ravenscar_fetch_registers (struct target_ops *ops,
                                        struct regcache *regcache, int regnum);
 static void ravenscar_store_registers (struct target_ops *ops,
                                        struct regcache *regcache, int regnum);
-static void ravenscar_prepare_to_store (struct regcache *regcache);
+static void ravenscar_prepare_to_store (struct target_ops *ops,
+					struct regcache *regcache);
 static void ravenscar_resume (struct target_ops *ops, ptid_t ptid, int step,
 			      enum gdb_signal siggnal);
 static void ravenscar_mourn_inferior (struct target_ops *ops);
@@ -303,14 +304,14 @@ ravenscar_store_registers (struct target_ops *ops,
 }
 
 static void
-ravenscar_prepare_to_store (struct regcache *regcache)
+ravenscar_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
 {
   struct target_ops *beneath = find_target_beneath (&ravenscar_ops);
 
   if (!ravenscar_runtime_initialized ()
       || ptid_equal (inferior_ptid, base_magic_null_ptid)
       || ptid_equal (inferior_ptid, ravenscar_running_thread ()))
-    beneath->to_prepare_to_store (regcache);
+    beneath->to_prepare_to_store (beneath, regcache);
   else
     {
       struct gdbarch *gdbarch = get_regcache_arch (regcache);
diff --git a/gdb/record-full.c b/gdb/record-full.c
index 3a8d326..058da8a 100644
--- a/gdb/record-full.c
+++ b/gdb/record-full.c
@@ -2148,7 +2148,8 @@ record_full_core_fetch_registers (struct target_ops *ops,
 /* "to_prepare_to_store" method for prec over corefile.  */
 
 static void
-record_full_core_prepare_to_store (struct regcache *regcache)
+record_full_core_prepare_to_store (struct target_ops *ops,
+				   struct regcache *regcache)
 {
 }
 
diff --git a/gdb/remote-m32r-sdi.c b/gdb/remote-m32r-sdi.c
index 2f910e6..1955ec1 100644
--- a/gdb/remote-m32r-sdi.c
+++ b/gdb/remote-m32r-sdi.c
@@ -1013,7 +1013,7 @@ m32r_store_register (struct target_ops *ops,
    debugged.  */
 
 static void
-m32r_prepare_to_store (struct regcache *regcache)
+m32r_prepare_to_store (struct target_ops *target, struct regcache *regcache)
 {
   /* Do nothing, since we can store individual regs.  */
   if (remote_debug)
diff --git a/gdb/remote-mips.c b/gdb/remote-mips.c
index 1619622..5aa57f1 100644
--- a/gdb/remote-mips.c
+++ b/gdb/remote-mips.c
@@ -92,7 +92,8 @@ static int mips_map_regno (struct gdbarch *, int);
 
 static void mips_set_register (int regno, ULONGEST value);
 
-static void mips_prepare_to_store (struct regcache *regcache);
+static void mips_prepare_to_store (struct target_ops *ops,
+				   struct regcache *regcache);
 
 static int mips_fetch_word (CORE_ADDR addr, unsigned int *valp);
 
@@ -2069,7 +2070,7 @@ mips_fetch_registers (struct target_ops *ops,
    registers, so this function doesn't have to do anything.  */
 
 static void
-mips_prepare_to_store (struct regcache *regcache)
+mips_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
 {
 }
 
diff --git a/gdb/remote.c b/gdb/remote.c
index 1d6ac90..b352ca6 100644
--- a/gdb/remote.c
+++ b/gdb/remote.c
@@ -101,7 +101,8 @@ static void async_remote_interrupt_twice (gdb_client_data);
 
 static void remote_files_info (struct target_ops *ignore);
 
-static void remote_prepare_to_store (struct regcache *regcache);
+static void remote_prepare_to_store (struct target_ops *ops,
+				     struct regcache *regcache);
 
 static void remote_open (char *name, int from_tty);
 
@@ -6348,7 +6349,7 @@ remote_fetch_registers (struct target_ops *ops,
    first.  */
 
 static void
-remote_prepare_to_store (struct regcache *regcache)
+remote_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
 {
   struct remote_arch_state *rsa = get_remote_arch_state ();
   int i;
diff --git a/gdb/target.c b/gdb/target.c
index 920f916..ecffc9c 100644
--- a/gdb/target.c
+++ b/gdb/target.c
@@ -96,8 +96,6 @@ static struct target_ops debug_target;
 
 static void debug_to_open (char *, int);
 
-static void debug_to_prepare_to_store (struct regcache *);
-
 static void debug_to_files_info (struct target_ops *);
 
 static int debug_to_insert_breakpoint (struct gdbarch *,
@@ -623,7 +621,7 @@ update_current_target (void)
       /* Do not inherit to_wait.  */
       /* Do not inherit to_fetch_registers.  */
       /* Do not inherit to_store_registers.  */
-      INHERIT (to_prepare_to_store, t);
+      /* Do not inherit to_prepare_to_store.  */
       INHERIT (deprecated_xfer_memory, t);
       INHERIT (to_files_info, t);
       INHERIT (to_insert_breakpoint, t);
@@ -757,9 +755,6 @@ update_current_target (void)
   de_fault (to_post_attach,
 	    (void (*) (int))
 	    target_ignore);
-  de_fault (to_prepare_to_store,
-	    (void (*) (struct regcache *))
-	    noprocess);
   de_fault (deprecated_xfer_memory,
 	    (int (*) (CORE_ADDR, gdb_byte *, int, int,
 		      struct mem_attrib *, struct target_ops *))
@@ -4033,6 +4028,26 @@ target_store_registers (struct regcache *regcache, int regno)
   noprocess ();
 }
 
+/* See target.h.  */
+
+void
+target_prepare_to_store (struct regcache *regcache)
+{
+  struct target_ops *t;
+
+  for (t = current_target.beneath; t != NULL; t = t->beneath)
+    {
+      if (t->to_prepare_to_store != NULL)
+	{
+	  t->to_prepare_to_store (t, regcache);
+	  if (targetdebug)
+	    fprintf_unfiltered (gdb_stdlog, "target_prepare_to_store");
+
+	  return;
+	}
+    }
+}
+
 int
 target_core_of_thread (ptid_t ptid)
 {
@@ -4485,14 +4500,6 @@ target_call_history_range (ULONGEST begin, ULONGEST end, int flags)
   tcomplain ();
 }
 
-static void
-debug_to_prepare_to_store (struct regcache *regcache)
-{
-  debug_target.to_prepare_to_store (regcache);
-
-  fprintf_unfiltered (gdb_stdlog, "target_prepare_to_store ()\n");
-}
-
 static int
 deprecated_debug_xfer_memory (CORE_ADDR memaddr, bfd_byte *myaddr, int len,
 			      int write, struct mem_attrib *attrib,
@@ -4944,7 +4951,6 @@ setup_target_debug (void)
 
   current_target.to_open = debug_to_open;
   current_target.to_post_attach = debug_to_post_attach;
-  current_target.to_prepare_to_store = debug_to_prepare_to_store;
   current_target.deprecated_xfer_memory = deprecated_debug_xfer_memory;
   current_target.to_files_info = debug_to_files_info;
   current_target.to_insert_breakpoint = debug_to_insert_breakpoint;
diff --git a/gdb/target.h b/gdb/target.h
index 1bf716e..e890999 100644
--- a/gdb/target.h
+++ b/gdb/target.h
@@ -434,7 +434,7 @@ struct target_ops
 		       ptid_t, struct target_waitstatus *, int);
     void (*to_fetch_registers) (struct target_ops *, struct regcache *, int);
     void (*to_store_registers) (struct target_ops *, struct regcache *, int);
-    void (*to_prepare_to_store) (struct regcache *);
+    void (*to_prepare_to_store) (struct target_ops *, struct regcache *);
 
     /* Transfer LEN bytes of memory between GDB address MYADDR and
        target address MEMADDR.  If WRITE, transfer them to the target, else
@@ -1055,8 +1055,7 @@ extern void target_store_registers (struct regcache *regcache, int regs);
    that REGISTERS contains all the registers from the program being
    debugged.  */
 
-#define	target_prepare_to_store(regcache)	\
-     (*current_target.to_prepare_to_store) (regcache)
+extern void target_prepare_to_store (struct regcache *);
 
 /* Determine current address space of thread PTID.  */
 
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 14/24] record-btrace: provide xfer_partial target method
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (3 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 10/24] target: add ops parameter to to_prepare_to_store method Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:08   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 07/24] record-btrace: optionally indent function call history Markus Metzger
                   ` (19 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Provide the xfer_partial target method for the btrace record target.

Only allow accesses to read-only memory while we're replaying.
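
In essence, the filtering works as sketched below (condensed from the patch;
the complete implementation, including truncating the request to fit the
section, is in the diff):

  /* While replaying, only allow accesses to read-only memory.  */
  if (record_btrace_is_replaying ()
      && (object == TARGET_OBJECT_MEMORY
	  || object == TARGET_OBJECT_RAW_MEMORY
	  || object == TARGET_OBJECT_STACK_MEMORY))
    {
      struct target_section *section = target_section_by_addr (ops, offset);

      if (section == NULL
	  || (bfd_get_section_flags (section->bfd, section->the_bfd_section)
	      & SEC_READONLY) == 0)
	return -1;
    }

  /* Otherwise, forward the request to the target beneath.  */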

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* record-btrace.c (record_btrace_xfer_partial): New.
	(init_record_btrace_ops): Initialize xfer_partial.


---
 gdb/record-btrace.c |   58 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index cb1f3bb..831a367 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -754,6 +754,63 @@ record_btrace_is_replaying (void)
   return 0;
 }
 
+/* The to_xfer_partial method of target record-btrace.  */
+
+static LONGEST
+record_btrace_xfer_partial (struct target_ops *ops, enum target_object object,
+			    const char *annex, gdb_byte *readbuf,
+			    const gdb_byte *writebuf, ULONGEST offset,
+			    LONGEST len)
+{
+  struct target_ops *t;
+
+  /* Normalize the request so len is positive.  */
+  if (len < 0)
+    {
+      offset += len;
+      len = - len;
+    }
+
+  /* Filter out requests that don't make sense during replay.  */
+  if (record_btrace_is_replaying ())
+    {
+      switch (object)
+	{
+	case TARGET_OBJECT_MEMORY:
+	case TARGET_OBJECT_RAW_MEMORY:
+	case TARGET_OBJECT_STACK_MEMORY:
+	  {
+	    /* We allow reading readonly memory.  */
+	    struct target_section *section;
+
+	    section = target_section_by_addr (ops, offset);
+	    if (section != NULL)
+	      {
+		/* Check if the section we found is readonly.  */
+		if ((bfd_get_section_flags (section->bfd,
+					    section->the_bfd_section)
+		     & SEC_READONLY) != 0)
+		  {
+		    /* Truncate the request to fit into this section.  */
+		    len = min (len, section->endaddr - offset);
+		    break;
+		  }
+	      }
+
+	    return -1;
+	  }
+	}
+    }
+
+  /* Forward the request.  */
+  for (t = ops->beneath; t != NULL; t = t->beneath)
+    if (t->to_xfer_partial != NULL)
+      return t->to_xfer_partial (t, object, annex, readbuf, writebuf,
+				 offset, len);
+
+  return -1;
+}
+
 /* The to_fetch_registers method of target record-btrace.  */
 
 static void
@@ -936,6 +993,7 @@ init_record_btrace_ops (void)
   ops->to_call_history_from = record_btrace_call_history_from;
   ops->to_call_history_range = record_btrace_call_history_range;
   ops->to_record_is_replaying = record_btrace_is_replaying;
+  ops->to_xfer_partial = record_btrace_xfer_partial;
   ops->to_fetch_registers = record_btrace_fetch_registers;
   ops->to_store_registers = record_btrace_store_registers;
   ops->to_prepare_to_store = record_btrace_prepare_to_store;
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 11/24] record-btrace: supply register target methods
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (7 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 16/24] record-btrace: provide target_find_new_threads method Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:07   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 02/24] record: upcase record_print_flag enumeration constants Markus Metzger
                   ` (15 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Supply target methods to allow reading the PC.  Forbid anything else.

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* record-btrace.c (record_btrace_fetch_registers,
	record_btrace_store_registers,
	record_btrace_to_prepare_to_store): New.
	(init_record_btrace_ops): Add the above.


---
 gdb/record-btrace.c |   95 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 95 insertions(+), 0 deletions(-)

diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 5e41b20..e9c0801 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -32,6 +32,7 @@
 #include "ui-out.h"
 #include "symtab.h"
 #include "filenames.h"
+#include "regcache.h"
 
 /* The target_ops of record-btrace.  */
 static struct target_ops record_btrace_ops;
@@ -752,6 +753,97 @@ record_btrace_is_replaying (void)
   return 0;
 }
 
+/* The to_fetch_registers method of target record-btrace.  */
+
+static void
+record_btrace_fetch_registers (struct target_ops *ops,
+			       struct regcache *regcache, int regno)
+{
+  struct btrace_insn_iterator *replay;
+  struct thread_info *tp;
+
+  tp = find_thread_ptid (inferior_ptid);
+  if (tp == NULL)
+    return;
+
+  replay = tp->btrace.replay;
+  if (replay != NULL)
+    {
+      const struct btrace_insn *insn;
+      struct gdbarch *gdbarch;
+      int pcreg;
+
+      gdbarch = get_regcache_arch (regcache);
+      pcreg = gdbarch_pc_regnum (gdbarch);
+      if (pcreg < 0)
+	return;
+
+      /* We can only provide the PC register.  */
+      if (regno >= 0 && regno != pcreg)
+	return;
+
+      insn = btrace_insn_get (replay);
+      if (insn == NULL)
+	return;
+
+      regcache_raw_supply (regcache, regno, &insn->pc);
+    }
+  else
+    {
+      struct target_ops *t;
+
+      for (t = ops->beneath; t != NULL; t = t->beneath)
+	if (t->to_fetch_registers != NULL)
+	  {
+	    t->to_fetch_registers (t, regcache, regno);
+	    break;
+	  }
+    }
+}
+
+/* The to_store_registers method of target record-btrace.  */
+
+static void
+record_btrace_store_registers (struct target_ops *ops,
+			       struct regcache *regcache, int regno)
+{
+  struct target_ops *t;
+
+  if (record_btrace_is_replaying ())
+    return;
+
+  if (may_write_registers == 0)
+    error (_("Writing to registers is not allowed (regno %d)"), regno);
+
+  for (t = ops->beneath; t != NULL; t = t->beneath)
+    if (t->to_store_registers != NULL)
+      {
+	t->to_store_registers (t, regcache, regno);
+	return;
+      }
+
+  noprocess ();
+}
+
+/* The to_prepare_to_store method of target record-btrace.  */
+
+static void
+record_btrace_prepare_to_store (struct target_ops *ops,
+				struct regcache *regcache)
+{
+  struct target_ops *t;
+
+  if (record_btrace_is_replaying ())
+    return;
+
+  for (t = ops->beneath; t != NULL; t = t->beneath)
+    if (t->to_prepare_to_store != NULL)
+      {
+	t->to_prepare_to_store (t, regcache);
+	return;
+      }
+}
+
 /* Initialize the record-btrace target ops.  */
 
 static void
@@ -779,6 +871,9 @@ init_record_btrace_ops (void)
   ops->to_call_history_from = record_btrace_call_history_from;
   ops->to_call_history_range = record_btrace_call_history_range;
   ops->to_record_is_replaying = record_btrace_is_replaying;
+  ops->to_fetch_registers = record_btrace_fetch_registers;
+  ops->to_store_registers = record_btrace_store_registers;
+  ops->to_prepare_to_store = record_btrace_prepare_to_store;
   ops->to_stratum = record_stratum;
   ops->to_magic = OPS_MAGIC;
 }
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 07/24] record-btrace: optionally indent function call history
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (4 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 14/24] record-btrace: provide xfer_partial target method Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:06   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 08/24] record-btrace: make ranges include begin and end Markus Metzger
                   ` (18 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches, Christian Himpel

Add a new modifier /c to the "record function-call-history" command to
indent the function name based on its depth in the call stack.

Also reorder the optional fields to have the indentation at the very beginning.
Prefix the insn range (/i modifier) with "inst ".
Prefix the source line (/l modifier) with "at ".
Change the range syntax from "begin-end" to "begin,end" so that ranges can be
copied and pasted into the "record instruction-history" and "list" commands.

Adjust the respective tests and add new tests for the /c modifier.

There is one known indentation bug that results from the current instruction
already being part of the branch trace.  When the current instruction is the
first (and only) instruction in a function on the outermost level for which we
have not seen the call, indentation starts at level 1 with two leading spaces
instead of at level 0.

Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
CC: Christian Himpel  <christian.himpel@intel.com>

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

    * record.h (enum record_print_flag)
    <record_print_indent_calls>: New.
    * record.c (get_call_history_modifiers): Recognize /c modifier.
    (_initialize_record): Document /c modifier.
    * record-btrace.c (btrace_call_history): Add btinfo parameter.
    Reorder fields.  Optionally indent the function name.  Update
    all users.
    * NEWS: Announce changes.

testsuite/
    * gdb.btrace/function_call_history.exp: Fix expected field
    order for "record function-call-history".
    Add new tests for "record function-call-history /c".
    * gdb.btrace/exception.cc: New.
    * gdb.btrace/exception.exp: New.
    * gdb.btrace/tailcall.exp: New.
    * gdb.btrace/x86-tailcall.S: New.
    * gdb.btrace/x86-tailcall.c: New.
    * gdb.btrace/unknown_functions.c: New.
    * gdb.btrace/unknown_functions.exp: New.
    * gdb.btrace/Makefile.in (EXECUTABLES): Add new.

doc/
    * gdb.texinfo (Process Record and Replay): Document new /c
    modifier accepted by "record function-call-history".


---
 gdb/NEWS                                           |    6 +
 gdb/doc/gdb.texinfo                                |   12 +-
 gdb/record-btrace.c                                |   33 ++-
 gdb/record.c                                       |    4 +
 gdb/record.h                                       |    3 +
 gdb/testsuite/gdb.btrace/Makefile.in               |    3 +-
 gdb/testsuite/gdb.btrace/exception.cc              |   56 ++++
 gdb/testsuite/gdb.btrace/exception.exp             |   65 +++++
 gdb/testsuite/gdb.btrace/function_call_history.exp |  112 +++++++--
 gdb/testsuite/gdb.btrace/tailcall.exp              |   49 ++++
 gdb/testsuite/gdb.btrace/unknown_functions.c       |   45 ++++
 gdb/testsuite/gdb.btrace/unknown_functions.exp     |   58 +++++
 gdb/testsuite/gdb.btrace/x86-tailcall.S            |  269 ++++++++++++++++++++
 gdb/testsuite/gdb.btrace/x86-tailcall.c            |   39 +++
 14 files changed, 716 insertions(+), 38 deletions(-)
 create mode 100644 gdb/testsuite/gdb.btrace/exception.cc
 create mode 100755 gdb/testsuite/gdb.btrace/exception.exp
 create mode 100644 gdb/testsuite/gdb.btrace/tailcall.exp
 create mode 100644 gdb/testsuite/gdb.btrace/unknown_functions.c
 create mode 100644 gdb/testsuite/gdb.btrace/unknown_functions.exp
 create mode 100644 gdb/testsuite/gdb.btrace/x86-tailcall.S
 create mode 100644 gdb/testsuite/gdb.btrace/x86-tailcall.c

diff --git a/gdb/NEWS b/gdb/NEWS
index e469f1e..6ac910a 100644
--- a/gdb/NEWS
+++ b/gdb/NEWS
@@ -13,6 +13,12 @@ Nios II ELF 			nios2*-*-elf
 Nios II GNU/Linux		nios2*-*-linux
 Texas Instruments MSP430	msp430*-*-elf
 
+* The command 'record function-call-history' supports a new modifier '/c' to
+  indent the function names based on their call stack depth.
+  The fields for the '/i' and '/l' modifier have been reordered.
+  The instruction range is now prefixed with 'inst'.
+  The source line range is now prefixed with 'at'.
+
 * New commands:
 catch rethrow
   Like "catch throw", but catches a re-thrown exception.
diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
index fae54e4..2cfc20b 100644
--- a/gdb/doc/gdb.texinfo
+++ b/gdb/doc/gdb.texinfo
@@ -6419,7 +6419,9 @@ line for each sequence of instructions that belong to the same
 function giving the name of that function, the source lines
 for this instruction sequence (if the @code{/l} modifier is
 specified), and the instructions numbers that form the sequence (if
-the @code{/i} modifier is specified).
+the @code{/i} modifier is specified).  The function names are indented
+to reflect the call stack depth if the @code{/c} modifier is
+specified.
 
 @smallexample
 (@value{GDBP}) @b{list 1, 10}
@@ -6433,10 +6435,10 @@ the @code{/i} modifier is specified).
 8     foo ();
 9     ...
 10  @}
-(@value{GDBP}) @b{record function-call-history /l}
-1  foo.c:6-8   bar
-2  foo.c:2-3   foo
-3  foo.c:9-10  bar
+(@value{GDBP}) @b{record function-call-history /lc}
+1  bar     at foo.c:6,8
+2    foo   at foo.c:2,3
+3  bar     at foo.c:9,10
 @end smallexample
 
 By default, ten lines are printed.  This can be changed using the
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index df69a41..99dc046 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -435,7 +435,7 @@ btrace_call_history_insn_range (struct ui_out *uiout,
   end = begin + size - 1;
 
   ui_out_field_uint (uiout, "insn begin", begin);
-  ui_out_text (uiout, "-");
+  ui_out_text (uiout, ",");
   ui_out_field_uint (uiout, "insn end", end);
 }
 
@@ -467,7 +467,7 @@ btrace_call_history_src_line (struct ui_out *uiout,
   if (end == begin)
     return;
 
-  ui_out_text (uiout, "-");
+  ui_out_text (uiout, ",");
   ui_out_field_int (uiout, "max line", end);
 }
 
@@ -475,6 +475,7 @@ btrace_call_history_src_line (struct ui_out *uiout,
 
 static void
 btrace_call_history (struct ui_out *uiout,
+		     const struct btrace_thread_info *btinfo,
 		     const struct btrace_call_iterator *begin,
 		     const struct btrace_call_iterator *end,
 		     enum record_print_flag flags)
@@ -498,23 +499,33 @@ btrace_call_history (struct ui_out *uiout,
       ui_out_field_uint (uiout, "index", bfun->number);
       ui_out_text (uiout, "\t");
 
+      if ((flags & RECORD_PRINT_INDENT_CALLS) != 0)
+	{
+	  int level = bfun->level + btinfo->level, i;
+
+	  for (i = 0; i < level; ++i)
+	    ui_out_text (uiout, "  ");
+	}
+
+      if (sym != NULL)
+	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (sym));
+      else if (msym != NULL)
+	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (msym));
+      else
+	ui_out_field_string (uiout, "function", "<unknown>");
+
       if ((flags & RECORD_PRINT_INSN_RANGE) != 0)
 	{
+	  ui_out_text (uiout, "\tinst ");
 	  btrace_call_history_insn_range (uiout, bfun);
-	  ui_out_text (uiout, "\t");
 	}
 
       if ((flags & RECORD_PRINT_SRC_LINE) != 0)
 	{
+	  ui_out_text (uiout, "\tat ");
 	  btrace_call_history_src_line (uiout, bfun);
-	  ui_out_text (uiout, "\t");
 	}
 
-      if (sym != NULL)
-	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (sym));
-      else if (msym != NULL)
-	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (msym));
-
       ui_out_text (uiout, "\n");
     }
 }
@@ -571,7 +582,7 @@ record_btrace_call_history (int size, int flags)
     }
 
   if (covered > 0)
-    btrace_call_history (uiout, &begin, &end, flags);
+    btrace_call_history (uiout, btinfo, &begin, &end, flags);
   else
     {
       if (size < 0)
@@ -623,7 +634,7 @@ record_btrace_call_history_range (ULONGEST from, ULONGEST to, int flags)
   if (found == 0)
     btrace_call_end (&end, btinfo);
 
-  btrace_call_history (uiout, &begin, &end, flags);
+  btrace_call_history (uiout, btinfo, &begin, &end, flags);
   btrace_set_call_history (btinfo, &begin, &end);
 
   do_cleanups (uiout_cleanup);
diff --git a/gdb/record.c b/gdb/record.c
index 07b1b97..ffe9810 100644
--- a/gdb/record.c
+++ b/gdb/record.c
@@ -575,6 +575,9 @@ get_call_history_modifiers (char **arg)
 	    case 'i':
 	      modifiers |= RECORD_PRINT_INSN_RANGE;
 	      break;
+	    case 'c':
+	      modifiers |= RECORD_PRINT_INDENT_CALLS;
+	      break;
 	    default:
 	      error (_("Invalid modifier: %c."), *args);
 	    }
@@ -809,6 +812,7 @@ function.\n\
 Without modifiers, it prints the function name.\n\
 With a /l modifier, the source file and line number range is included.\n\
 With a /i modifier, the instruction number range is included.\n\
+With a /c modifier, the output is indented based on the call stack depth.\n\
 With no argument, prints ten more lines after the previous ten-line print.\n\
 \"record function-call-history -\" prints ten lines before a previous ten-line \
 print.\n\
diff --git a/gdb/record.h b/gdb/record.h
index 65d508f..9acc7de 100644
--- a/gdb/record.h
+++ b/gdb/record.h
@@ -40,6 +40,9 @@ enum record_print_flag
 
   /* Print the instruction number range (if applicable).  */
   RECORD_PRINT_INSN_RANGE = (1 << 1),
+
+  /* Indent based on call stack depth (if applicable).  */
+  RECORD_PRINT_INDENT_CALLS = (1 << 2)
 };
 
 /* Wrapper for target_read_memory that prints a debug message if
diff --git a/gdb/testsuite/gdb.btrace/Makefile.in b/gdb/testsuite/gdb.btrace/Makefile.in
index f4c06d1..5c70700 100644
--- a/gdb/testsuite/gdb.btrace/Makefile.in
+++ b/gdb/testsuite/gdb.btrace/Makefile.in
@@ -1,7 +1,8 @@
 VPATH = @srcdir@
 srcdir = @srcdir@
 
-EXECUTABLES   = enable function_call_history instruction_history
+EXECUTABLES   = enable function_call_history instruction_history tailcall \
+  exception
 
 MISCELLANEOUS =
 
diff --git a/gdb/testsuite/gdb.btrace/exception.cc b/gdb/testsuite/gdb.btrace/exception.cc
new file mode 100644
index 0000000..029a4bc
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/exception.cc
@@ -0,0 +1,56 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2013 Free Software Foundation, Inc.
+
+   Contributed by Intel Corp. <markus.t.metzger@intel.com>
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+static void
+bad (void)
+{
+  throw 42;
+}
+
+static void
+bar (void)
+{
+  bad ();
+}
+
+static void
+foo (void)
+{
+  bar ();
+}
+
+static void
+test (void)
+{
+  try
+    {
+      foo ();
+    }
+  catch (...)
+    {
+    }
+}
+
+int
+main (void)
+{
+  test ();
+  test (); /* bp.1  */
+  return 0; /* bp.2  */
+}
diff --git a/gdb/testsuite/gdb.btrace/exception.exp b/gdb/testsuite/gdb.btrace/exception.exp
new file mode 100755
index 0000000..77a07fd
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/exception.exp
@@ -0,0 +1,65 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile exception.cc
+if [prepare_for_testing $testfile.exp $testfile $srcfile {c++ debug}] {
+    return -1
+}
+if ![runto_main] {
+    return -1
+}
+
+# we want to see the full trace for this test
+gdb_test_no_output "set record function-call-history-size 0"
+
+# set bp
+set bp_1 [gdb_get_line_number "bp.1" $srcfile]
+set bp_2 [gdb_get_line_number "bp.2" $srcfile]
+gdb_breakpoint $bp_1
+gdb_breakpoint $bp_2
+
+# trace the code between the two breakpoints
+gdb_continue_to_breakpoint "cont to $bp_1" ".*$srcfile:$bp_1.*"
+gdb_test_no_output "record btrace"
+gdb_continue_to_breakpoint "cont to $bp_2" ".*$srcfile:$bp_2.*"
+
+# show the flat branch trace
+send_gdb "record function-call-history 1\n"
+gdb_expect_list "exception - flat" "\r\n$gdb_prompt $" {"\r
+1\ttest\\(\\)\r
+2\tfoo\\(\\)\r
+3\tbar\\(\\)\r
+4\tbad\\(\\)\r" "\r
+\[0-9\]*\ttest\\(\\)"}
+
+# show the branch trace with calls indented
+#
+# here we see a known bug that the indentation starts at level 1 with
+# two leading spaces instead of level 0 without leading spaces.
+send_gdb "record function-call-history /c 1\n"
+gdb_expect_list "exception - calls indented" "\r\n$gdb_prompt $" {"\r
+1\t  test\\(\\)\r
+2\t    foo\\(\\)\r
+3\t      bar\\(\\)\r
+4\t        bad\\(\\)\r" "\r
+\[0-9\]*\t  test\\(\\)"}
diff --git a/gdb/testsuite/gdb.btrace/function_call_history.exp b/gdb/testsuite/gdb.btrace/function_call_history.exp
index d694d5c..754cbbe 100644
--- a/gdb/testsuite/gdb.btrace/function_call_history.exp
+++ b/gdb/testsuite/gdb.btrace/function_call_history.exp
@@ -62,6 +62,30 @@ gdb_test "record function-call-history" "
 20\tinc\r
 21\tmain\r" "record function-call-history - with size unlimited"
 
+# show indented function call history with unlimited size
+gdb_test "record function-call-history /c 1" "
+1\tmain\r
+2\t  inc\r
+3\tmain\r
+4\t  inc\r
+5\tmain\r
+6\t  inc\r
+7\tmain\r
+8\t  inc\r
+9\tmain\r
+10\t  inc\r
+11\tmain\r
+12\t  inc\r
+13\tmain\r
+14\t  inc\r
+15\tmain\r
+16\t  inc\r
+17\tmain\r
+18\t  inc\r
+19\tmain\r
+20\t  inc\r
+21\tmain\r" "indented record function-call-history - with size unlimited"
+
 # show function call history with size of 21, we expect to see all 21 entries
 gdb_test_no_output "set record function-call-history-size 21"
 # show function call history
@@ -155,32 +179,35 @@ gdb_test "record function-call-history -" "At the start of the branch trace reco
 # make sure we cannot move any further back
 gdb_test "record function-call-history -" "At the start of the branch trace record\\." "record function-call-history - at the start (2)"
 
+# don't mess around with path names
+gdb_test_no_output "set filename-display basename"
+
 # moving forward again, but this time with file and line number, expected to see the first 15 entries
 gdb_test "record function-call-history /l +" "
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r" "record function-call-history /l - show first 15 entries"
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r" "record function-call-history /l - show first 15 entries"
 
 # moving forward and expect to see the latest 6 entries
 gdb_test "record function-call-history /l +" "
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-41\tmain\r
-.*$srcfile:22-24\tinc\r
-.*$srcfile:40-43\tmain\r" "record function-call-history /l - show last 6 entries"
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,41\r
+\[0-9\]*\tinc\tat $srcfile:22,24\r
+\[0-9\]*\tmain\tat $srcfile:40,43\r" "record function-call-history /l - show last 6 entries"
 
 # moving further forward shouldn't work
 gdb_test "record function-call-history /l +" "At the end of the branch trace record\\." "record function-call-history /l - at the end (1)"
@@ -219,3 +246,46 @@ gdb_test "record function-call-history" "
 29\tfib\r
 30\tfib\r
 31\tmain" "show recursive function call history"
+
+# show indented function call history for fib
+gdb_test "record function-call-history /c 21, +11" "
+21\tmain\r
+22\t  fib\r
+23\t    fib\r
+24\t  fib\r
+25\t    fib\r
+26\t      fib\r
+27\t    fib\r
+28\t      fib\r
+29\t    fib\r
+30\t  fib\r
+31\tmain" "indented record function-call-history - fib"
+
+# make sure we can handle incomplete trace with respect to indentation
+if ![runto_main] {
+    return -1
+}
+# navigate to the fib in line 24 above
+gdb_breakpoint fib
+gdb_continue_to_breakpoint "cont to fib.1"
+gdb_continue_to_breakpoint "cont to fib.2"
+gdb_continue_to_breakpoint "cont to fib.3"
+gdb_continue_to_breakpoint "cont to fib.4"
+
+# start tracing
+gdb_test_no_output "record btrace"
+
+# continue until line 30 above
+delete_breakpoints
+set bp_location [gdb_get_line_number "bp.2" $testfile.c]
+gdb_breakpoint $bp_location
+gdb_continue_to_breakpoint "cont to $bp_location" ".*$testfile.c:$bp_location.*"
+
+# let's look at the trace. we expect to see the tail of the above listing.
+gdb_test "record function-call-history /c" "
+1\t      fib\r
+2\t    fib\r
+3\t      fib\r
+4\t    fib\r
+5\t  fib\r
+6\tmain" "indented record function-call-history - fib"
diff --git a/gdb/testsuite/gdb.btrace/tailcall.exp b/gdb/testsuite/gdb.btrace/tailcall.exp
new file mode 100644
index 0000000..cf9fdf3
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/tailcall.exp
@@ -0,0 +1,49 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile x86-tailcall.S
+if [prepare_for_testing tailcall.exp $testfile $srcfile {c++ debug}] {
+    return -1
+}
+if ![runto_main] {
+    return -1
+}
+
+# we want to see the full trace for this test
+gdb_test_no_output "set record function-call-history-size 0"
+
+# trace the call to foo
+gdb_test_no_output "record btrace"
+gdb_test "next"
+
+# show the flat branch trace
+gdb_test "record function-call-history 1" "
+1\tfoo\r
+2\tbar\r
+3\tmain" "tailcall - flat"
+
+# show the branch trace with calls indented
+gdb_test "record function-call-history /c 1" "
+1\t  foo\r
+2\t    bar\r
+3\tmain" "tailcall - calls indented"
diff --git a/gdb/testsuite/gdb.btrace/unknown_functions.c b/gdb/testsuite/gdb.btrace/unknown_functions.c
new file mode 100644
index 0000000..178c3e9
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/unknown_functions.c
@@ -0,0 +1,45 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2013 Free Software Foundation, Inc.
+
+   Contributed by Intel Corp. <markus.t.metzger@intel.com>
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+static int foo (void);
+
+int test (void)
+{
+  return foo ();
+}
+
+static int
+bar (void)
+{
+  return 42;
+}
+
+static int
+foo (void)
+{
+  return bar ();
+}
+
+int
+main (void)
+{
+  test ();
+  test ();
+  return 0;
+}
diff --git a/gdb/testsuite/gdb.btrace/unknown_functions.exp b/gdb/testsuite/gdb.btrace/unknown_functions.exp
new file mode 100644
index 0000000..c7f33bf
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/unknown_functions.exp
@@ -0,0 +1,58 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile
+
+# discard local symbols
+set ldflags "additional_flags=-Wl,-x"
+if [prepare_for_testing $testfile.exp $testfile $srcfile $ldflags] {
+    return -1
+}
+if ![runto test] {
+    return -1
+}
+
+# we want to see the full trace for this test
+gdb_test_no_output "set record function-call-history-size 0"
+
+# trace from one call of test to the next
+gdb_test_no_output "record btrace"
+gdb_continue_to_breakpoint "cont to test" ".*test.*"
+
+# show the flat branch trace
+gdb_test "record function-call-history 1" "
+1\t<unknown>\r
+2\t<unknown>\r
+3\t<unknown>\r
+4\ttest\r
+5\tmain\r
+6\ttest" "unknown - flat"
+
+# show the branch trace with calls indented
+gdb_test "record function-call-history /c 1" "
+1\t    <unknown>\r
+2\t      <unknown>\r
+3\t    <unknown>\r
+4\t  test\r
+5\tmain\r
+6\t  test" "unknown - calls indented"
diff --git a/gdb/testsuite/gdb.btrace/x86-tailcall.S b/gdb/testsuite/gdb.btrace/x86-tailcall.S
new file mode 100644
index 0000000..5a4fede
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/x86-tailcall.S
@@ -0,0 +1,269 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2013 Free Software Foundation, Inc.
+
+   Contributed by Intel Corp. <markus.t.metzger@intel.com>
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+
+   This file has been generated using:
+   gcc -S -O2 -g x86-tailcall.c -o x86-tailcall.S  */
+
+	.file	"x86-tailcall.c"
+	.section	.debug_abbrev,"",@progbits
+.Ldebug_abbrev0:
+	.section	.debug_info,"",@progbits
+.Ldebug_info0:
+	.section	.debug_line,"",@progbits
+.Ldebug_line0:
+	.text
+.Ltext0:
+	.p2align 4,,15
+	.type	bar, @function
+bar:
+.LFB0:
+	.file 1 "gdb/testsuite/gdb.btrace/x86-tailcall.c"
+	.loc 1 22 0
+	.cfi_startproc
+	.loc 1 24 0
+	movl	$42, %eax
+	ret
+	.cfi_endproc
+.LFE0:
+	.size	bar, .-bar
+	.p2align 4,,15
+	.type	foo, @function
+foo:
+.LFB1:
+	.loc 1 28 0
+	.cfi_startproc
+	.loc 1 29 0
+	jmp	bar
+	.cfi_endproc
+.LFE1:
+	.size	foo, .-foo
+	.p2align 4,,15
+.globl main
+	.type	main, @function
+main:
+.LFB2:
+	.loc 1 34 0
+	.cfi_startproc
+	.loc 1 37 0
+	call	foo
+.LVL0:
+	addl	$1, %eax
+.LVL1:
+	.loc 1 39 0
+	ret
+	.cfi_endproc
+.LFE2:
+	.size	main, .-main
+.Letext0:
+	.section	.debug_loc,"",@progbits
+.Ldebug_loc0:
+.LLST0:
+	.quad	.LVL0-.Ltext0
+	.quad	.LVL1-.Ltext0
+	.value	0x3
+	.byte	0x70
+	.sleb128 1
+	.byte	0x9f
+	.quad	.LVL1-.Ltext0
+	.quad	.LFE2-.Ltext0
+	.value	0x1
+	.byte	0x50
+	.quad	0x0
+	.quad	0x0
+	.section	.debug_info
+	.long	0x9c
+	.value	0x3
+	.long	.Ldebug_abbrev0
+	.byte	0x8
+	.uleb128 0x1
+	.long	.LASF0
+	.byte	0x1
+	.long	.LASF1
+	.long	.LASF2
+	.quad	.Ltext0
+	.quad	.Letext0
+	.long	.Ldebug_line0
+	.uleb128 0x2
+	.string	"bar"
+	.byte	0x1
+	.byte	0x15
+	.byte	0x1
+	.long	0x4b
+	.quad	.LFB0
+	.quad	.LFE0
+	.byte	0x1
+	.byte	0x9c
+	.uleb128 0x3
+	.byte	0x4
+	.byte	0x5
+	.string	"int"
+	.uleb128 0x2
+	.string	"foo"
+	.byte	0x1
+	.byte	0x1b
+	.byte	0x1
+	.long	0x4b
+	.quad	.LFB1
+	.quad	.LFE1
+	.byte	0x1
+	.byte	0x9c
+	.uleb128 0x4
+	.byte	0x1
+	.long	.LASF3
+	.byte	0x1
+	.byte	0x21
+	.byte	0x1
+	.long	0x4b
+	.quad	.LFB2
+	.quad	.LFE2
+	.byte	0x1
+	.byte	0x9c
+	.uleb128 0x5
+	.long	.LASF4
+	.byte	0x1
+	.byte	0x23
+	.long	0x4b
+	.long	.LLST0
+	.byte	0x0
+	.byte	0x0
+	.section	.debug_abbrev
+	.uleb128 0x1
+	.uleb128 0x11
+	.byte	0x1
+	.uleb128 0x25
+	.uleb128 0xe
+	.uleb128 0x13
+	.uleb128 0xb
+	.uleb128 0x3
+	.uleb128 0xe
+	.uleb128 0x1b
+	.uleb128 0xe
+	.uleb128 0x11
+	.uleb128 0x1
+	.uleb128 0x12
+	.uleb128 0x1
+	.uleb128 0x10
+	.uleb128 0x6
+	.byte	0x0
+	.byte	0x0
+	.uleb128 0x2
+	.uleb128 0x2e
+	.byte	0x0
+	.uleb128 0x3
+	.uleb128 0x8
+	.uleb128 0x3a
+	.uleb128 0xb
+	.uleb128 0x3b
+	.uleb128 0xb
+	.uleb128 0x27
+	.uleb128 0xc
+	.uleb128 0x49
+	.uleb128 0x13
+	.uleb128 0x11
+	.uleb128 0x1
+	.uleb128 0x12
+	.uleb128 0x1
+	.uleb128 0x40
+	.uleb128 0xa
+	.byte	0x0
+	.byte	0x0
+	.uleb128 0x3
+	.uleb128 0x24
+	.byte	0x0
+	.uleb128 0xb
+	.uleb128 0xb
+	.uleb128 0x3e
+	.uleb128 0xb
+	.uleb128 0x3
+	.uleb128 0x8
+	.byte	0x0
+	.byte	0x0
+	.uleb128 0x4
+	.uleb128 0x2e
+	.byte	0x1
+	.uleb128 0x3f
+	.uleb128 0xc
+	.uleb128 0x3
+	.uleb128 0xe
+	.uleb128 0x3a
+	.uleb128 0xb
+	.uleb128 0x3b
+	.uleb128 0xb
+	.uleb128 0x27
+	.uleb128 0xc
+	.uleb128 0x49
+	.uleb128 0x13
+	.uleb128 0x11
+	.uleb128 0x1
+	.uleb128 0x12
+	.uleb128 0x1
+	.uleb128 0x40
+	.uleb128 0xa
+	.byte	0x0
+	.byte	0x0
+	.uleb128 0x5
+	.uleb128 0x34
+	.byte	0x0
+	.uleb128 0x3
+	.uleb128 0xe
+	.uleb128 0x3a
+	.uleb128 0xb
+	.uleb128 0x3b
+	.uleb128 0xb
+	.uleb128 0x49
+	.uleb128 0x13
+	.uleb128 0x2
+	.uleb128 0x6
+	.byte	0x0
+	.byte	0x0
+	.byte	0x0
+	.section	.debug_pubnames,"",@progbits
+	.long	0x17
+	.value	0x2
+	.long	.Ldebug_info0
+	.long	0xa0
+	.long	0x70
+	.string	"main"
+	.long	0x0
+	.section	.debug_aranges,"",@progbits
+	.long	0x2c
+	.value	0x2
+	.long	.Ldebug_info0
+	.byte	0x8
+	.byte	0x0
+	.value	0x0
+	.value	0x0
+	.quad	.Ltext0
+	.quad	.Letext0-.Ltext0
+	.quad	0x0
+	.quad	0x0
+	.section	.debug_str,"MS",@progbits,1
+.LASF1:
+	.string	"gdb/testsuite/gdb.btrace/x86-tailcall.c"
+.LASF4:
+	.string	"answer"
+.LASF0:
+	.string	"GNU C 4.4.4 20100726 (Red Hat 4.4.4-13)"
+.LASF3:
+	.string	"main"
+.LASF2:
+	.string	"/users/mmetzger/gdb/gerrit/git"
+	.ident	"GCC: (GNU) 4.4.4 20100726 (Red Hat 4.4.4-13)"
+	.section	.note.GNU-stack,"",@progbits
diff --git a/gdb/testsuite/gdb.btrace/x86-tailcall.c b/gdb/testsuite/gdb.btrace/x86-tailcall.c
new file mode 100644
index 0000000..9e3b183
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/x86-tailcall.c
@@ -0,0 +1,39 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2013 Free Software Foundation, Inc.
+
+   Contributed by Intel Corp. <markus.t.metzger@intel.com>
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+static __attribute__ ((noinline)) int
+bar (void)
+{
+  return 42;
+}
+
+static __attribute__ ((noinline)) int
+foo (void)
+{
+  return bar ();
+}
+
+int
+main (void)
+{
+  int answer;
+
+  answer = foo ();
+  return ++answer;
+}
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 09/24] btrace: add replay position to btrace thread info
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (11 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 03/24] btrace: change branch trace data structure Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:07   ` Jan Kratochvil
  2013-07-03  9:14 ` [patch v4 22/24] infrun: reverse stepping from unknown functions Markus Metzger
                   ` (11 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Add to the branch trace thread info struct a branch trace instruction iterator
that points to the current replay position.

Free the iterator when btrace is cleared.

Start at the replay position for the instruction and function-call histories.
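
For illustration, "info record" would then also report the replay position,
roughly like this (a sketch based on the new output below; the instruction
and function counts and the thread are made up):

(gdb) info record
Active record target: record-btrace
Recorded 100 instructions in 12 functions for thread 1 (process 1234).
Replay in progress.  At instruction 40.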

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* btrace.h (struct btrace_thread_info) <replay>: New.
	(btrace_is_replaying): New.
	* btrace.c (btrace_clear): Free replay iterator.
	(btrace_is_replaying): New.
	* record-btrace.c (record_btrace_is_replaying): New.
	(record_btrace_info): Print insn number if replaying.
	(record_btrace_insn_history): Start at replay position.
	(record_btrace_call_history): Start at replay position.
	(init_record_btrace_ops): Init to_record_is_replaying.


---
 gdb/btrace.c        |   10 ++++++
 gdb/btrace.h        |    6 ++++
 gdb/record-btrace.c |   80 +++++++++++++++++++++++++++++++++++++++++++++-----
 3 files changed, 88 insertions(+), 8 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index 006deaa..0bec2cf 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -771,9 +771,11 @@ btrace_clear (struct thread_info *tp)
 
   xfree (btinfo->insn_history);
   xfree (btinfo->call_history);
+  xfree (btinfo->replay);
 
   btinfo->insn_history = NULL;
   btinfo->call_history = NULL;
+  btinfo->replay = NULL;
 }
 
 /* See btrace.h.  */
@@ -1371,3 +1373,11 @@ btrace_set_call_history (struct btrace_thread_info *btinfo,
   btinfo->call_history->begin = *begin;
   btinfo->call_history->end = *end;
 }
+
+/* See btrace.h.  */
+
+int
+btrace_is_replaying (struct thread_info *tp)
+{
+  return tp->btrace.replay != NULL;
+}
diff --git a/gdb/btrace.h b/gdb/btrace.h
index a3322d2..5a5b297 100644
--- a/gdb/btrace.h
+++ b/gdb/btrace.h
@@ -181,6 +181,9 @@ struct btrace_thread_info
 
   /* The function call history iterator.  */
   struct btrace_call_history *call_history;
+
+  /* The current replay position.  NULL if not replaying.  */
+  struct btrace_insn_iterator *replay;
 };
 
 /* Enable branch tracing for a thread.  */
@@ -301,4 +304,7 @@ extern void btrace_set_call_history (struct btrace_thread_info *,
 				     const struct btrace_call_iterator *begin,
 				     const struct btrace_call_iterator *end);
 
+/* Determine if branch tracing is currently replaying TP.  */
+extern int btrace_is_replaying (struct thread_info *tp);
+
 #endif /* BTRACE_H */
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index c7d6e9f..5e41b20 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -237,6 +237,10 @@ record_btrace_info (void)
   printf_unfiltered (_("Recorded %u instructions in %u functions for thread "
 		       "%d (%s).\n"), insns, calls, tp->num,
 		     target_pid_to_str (tp->ptid));
+
+  if (btrace_is_replaying (tp))
+    printf_unfiltered (_("Replay in progress.  At instruction %u.\n"),
+		       btrace_insn_number (btinfo->replay));
 }
 
 /* Print an unsigned int.  */
@@ -301,13 +305,34 @@ record_btrace_insn_history (int size, int flags)
   history = btinfo->insn_history;
   if (history == NULL)
     {
-      /* No matter the direction, we start with the tail of the trace.  */
-      btrace_insn_end (&begin, btinfo);
-      end = begin;
+      struct btrace_insn_iterator *replay;
 
       DEBUG ("insn-history (0x%x): %d", flags, size);
 
-      covered = btrace_insn_prev (&begin, context);
+      /* If we're replaying, we start at the replay position.  Otherwise, we
+	 start at the tail of the trace.  */
+      replay = btinfo->replay;
+      if (replay != NULL)
+	begin = *replay;
+      else
+	btrace_insn_end (&begin, btinfo);
+
+      /* We start from here and expand in the requested direction.  Then we
+	 expand in the other direction, as well, to fill up any remaining
+	 context.  */
+      end = begin;
+      if (size < 0)
+	{
+	  /* We want the current position covered, as well.  */
+	  covered = btrace_insn_next (&end, 1);
+	  covered += btrace_insn_prev (&begin, context - covered);
+	  covered += btrace_insn_next (&end, context - covered);
+	}
+      else
+	{
+	  covered = btrace_insn_next (&end, context);
+	  covered += btrace_insn_prev (&begin, context - covered);
+	}
     }
   else
     {
@@ -562,13 +587,37 @@ record_btrace_call_history (int size, int flags)
   history = btinfo->call_history;
   if (history == NULL)
     {
-      /* No matter the direction, we start with the tail of the trace.  */
-      btrace_call_end (&begin, btinfo);
-      end = begin;
+      struct btrace_insn_iterator *replay;
 
       DEBUG ("call-history (0x%x): %d", flags, size);
 
-      covered = btrace_call_prev (&begin, context);
+      /* If we're replaying, we start at the replay position.  Otherwise, we
+	 start at the tail of the trace.  */
+      replay = btinfo->replay;
+      if (replay != NULL)
+	{
+	  begin.function = replay->function;
+	  begin.btinfo = btinfo;
+	}
+      else
+	btrace_call_end (&begin, btinfo);
+
+      /* We start from here and expand in the requested direction.  Then we
+	 expand in the other direction, as well, to fill up any remaining
+	 context.  */
+      end = begin;
+      if (size < 0)
+	{
+	  /* We want the current position covered, as well.  */
+	  covered = btrace_call_next (&end, 1);
+	  covered += btrace_call_prev (&begin, context - covered);
+	  covered += btrace_call_next (&end, context - covered);
+	}
+      else
+	{
+	  covered = btrace_call_next (&end, context);
+	  covered += btrace_call_prev (&begin, context - covered);
+	}
     }
   else
     {
@@ -689,6 +738,20 @@ record_btrace_call_history_from (ULONGEST from, int size, int flags)
   record_btrace_call_history_range (begin, end, flags);
 }
 
+/* The to_record_is_replaying method of target record-btrace.  */
+
+static int
+record_btrace_is_replaying (void)
+{
+  struct thread_info *tp;
+
+  ALL_THREADS (tp)
+    if (btrace_is_replaying (tp))
+      return 1;
+
+  return 0;
+}
+
 /* Initialize the record-btrace target ops.  */
 
 static void
@@ -715,6 +778,7 @@ init_record_btrace_ops (void)
   ops->to_call_history = record_btrace_call_history;
   ops->to_call_history_from = record_btrace_call_history_from;
   ops->to_call_history_range = record_btrace_call_history_range;
+  ops->to_record_is_replaying = record_btrace_is_replaying;
   ops->to_stratum = record_stratum;
   ops->to_magic = OPS_MAGIC;
 }
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 22/24] infrun: reverse stepping from unknown functions
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (12 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 09/24] btrace: add replay position to btrace thread info Markus Metzger
@ 2013-07-03  9:14 ` Markus Metzger
  2013-08-18 19:09   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 13/24] record-btrace, frame: supply target-specific unwinder Markus Metzger
                   ` (10 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:14 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

When reverse-stepping, only insert a resume breakpoint at ecs->stop_func_start
if the function start is known.  Otherwise, keep single-stepping.
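
Spelled out, the check after this patch reads roughly as follows (a
simplified excerpt of the infrun.c hunk below, not compilable on its own;
a stop_func_start of 0 means the start address could not be determined):

  if (ecs->stop_func_start != stop_pc && ecs->stop_func_start != 0)
    {
      /* Function start known: set a step-resume breakpoint there
	 and resume.  */
    }
  /* Otherwise fall through and keep single-stepping back to the
     caller.  */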

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* infrun.c (handle_inferior_event): Check if we know the function
	start address.


---
 gdb/infrun.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gdb/infrun.c b/gdb/infrun.c
index dc1036d..bd44016 100644
--- a/gdb/infrun.c
+++ b/gdb/infrun.c
@@ -4939,7 +4939,7 @@ process_event_stop_test:
 		 or stepped back out of a signal handler to the first instruction
 		 of the function.  Just keep going, which will single-step back
 		 to the caller.  */
-	      if (ecs->stop_func_start != stop_pc)
+	      if (ecs->stop_func_start != stop_pc && ecs->stop_func_start != 0)
 		{
 		  struct symtab_and_line sr_sal;
 
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 21/24] record-btrace: show trace from enable location
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (22 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 04/24] record-btrace: fix insn range in function call history Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:10   ` instruction_history.exp unset variable [Re: [patch v4 21/24] record-btrace: show trace from enable location] Jan Kratochvil
  2013-08-18 19:16   ` [patch v4 21/24] record-btrace: show trace from enable location Jan Kratochvil
  2013-08-18 19:04 ` [patch v4 00/24] record-btrace: reverse Jan Kratochvil
  24 siblings, 2 replies; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

The btrace record target shows the branch trace starting at the location of the
first branch destination, which is the first trace record that BTS produces.

Now that the trace is read incrementally, we can add a dummy record for the
current PC when tracing is enabled, so the trace is shown from the location
where branch tracing has been enabled.
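
The new delta.exp test below illustrates the effect: right after enabling,
the trace is still empty, and a single stepi already yields a one-instruction
record at the enable location.  A sketch of such a session (output abridged,
thread and process ids made up):

(gdb) record btrace
(gdb) info record
Active record target: record-btrace
Recorded 0 instructions in 0 functions for thread 1 (process 1234).
(gdb) stepi
(gdb) info record
Active record target: record-btrace
Recorded 1 instructions in 1 functions for thread 1 (process 1234).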

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* btrace.c: Include regcache.h.
	(btrace_add_pc): New.
	(btrace_enable): Call btrace_add_pc.
	(btrace_is_empty): New.
	(btrace_fetch): Return if replaying.
	* btrace.h (btrace_is_empty): New.
	* record-btrace.c (require_btrace, record_btrace_info): Call
	btrace_is_empty.

testsuite/
	* gdb.btrace/exception.exp: Update.
	* gdb.btrace/instruction_history.exp: Update.
	* gdb.btrace/record_goto.exp: Update.
	* gdb.btrace/tailcall.exp: Update.
	* gdb.btrace/unknown_functions.exp: Update.
	* gdb.btrace/delta.exp: New.


---
 gdb/btrace.c                                     |   57 ++++++++++++
 gdb/btrace.h                                     |    4 +
 gdb/record-btrace.c                              |   10 +--
 gdb/testsuite/gdb.btrace/delta.exp               |   63 +++++++++++++
 gdb/testsuite/gdb.btrace/exception.exp           |   18 ++--
 gdb/testsuite/gdb.btrace/instruction_history.exp |   72 +++++++--------
 gdb/testsuite/gdb.btrace/record_goto.exp         |  103 +++++++++++-----------
 gdb/testsuite/gdb.btrace/tailcall.exp            |   16 ++--
 gdb/testsuite/gdb.btrace/unknown_functions.exp   |   22 +++--
 9 files changed, 244 insertions(+), 121 deletions(-)
 create mode 100644 gdb/testsuite/gdb.btrace/delta.exp

diff --git a/gdb/btrace.c b/gdb/btrace.c
index 072e9d3..d5772af 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -30,6 +30,7 @@
 #include "source.h"
 #include "filenames.h"
 #include "xml-support.h"
+#include "regcache.h"
 
 /* Print a record debug message.  Use do ... while (0) to avoid ambiguities
    when used in if statements.  */
@@ -664,6 +665,32 @@ btrace_compute_ftrace (struct btrace_thread_info *btinfo,
   btinfo->level = -level;
 }
 
+/* Add an entry for the current PC.  */
+
+static void
+btrace_add_pc (struct thread_info *tp)
+{
+  VEC (btrace_block_s) *btrace;
+  struct btrace_block *block;
+  struct regcache *regcache;
+  struct cleanup *cleanup;
+  CORE_ADDR pc;
+
+  regcache = get_thread_regcache (tp->ptid);
+  pc = regcache_read_pc (regcache);
+
+  btrace = NULL;
+  cleanup = make_cleanup (VEC_cleanup (btrace_block_s), &btrace);
+
+  block = VEC_safe_push (btrace_block_s, btrace, NULL);
+  block->begin = pc;
+  block->end = pc;
+
+  btrace_compute_ftrace (&tp->btrace, btrace);
+
+  do_cleanups (cleanup);
+}
+
 /* See btrace.h.  */
 
 void
@@ -678,6 +705,11 @@ btrace_enable (struct thread_info *tp)
   DEBUG ("enable thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
 
   tp->btrace.target = target_enable_btrace (tp->ptid);
+
+  /* Add an entry for the current PC so we start tracing from where we
+     enabled it.  */
+  if (tp->btrace.target != NULL)
+    btrace_add_pc (tp);
 }
 
 /* See btrace.h.  */
@@ -811,6 +843,12 @@ btrace_fetch (struct thread_info *tp)
   if (tinfo == NULL)
     return;
 
+  /* There's no way we could get new trace while replaying.
+     On the other hand, delta trace would return a partial record with the
+     current PC, which is the replay PC, not the last PC, as expected.  */
+  if (btinfo->replay != NULL)
+    return;
+
   cleanup = make_cleanup (VEC_cleanup (btrace_block_s), &btrace);
 
   /* Let's first try to extend the trace we already have.  */
@@ -1487,3 +1525,22 @@ btrace_is_replaying (struct thread_info *tp)
 {
   return tp->btrace.replay != NULL;
 }
+
+/* See btrace.h.  */
+
+int
+btrace_is_empty (struct thread_info *tp)
+{
+  struct btrace_insn_iterator begin, end;
+  struct btrace_thread_info *btinfo;
+
+  btinfo = &tp->btrace;
+
+  if (btinfo->begin == NULL)
+    return 1;
+
+  btrace_insn_begin (&begin, btinfo);
+  btrace_insn_end (&end, btinfo);
+
+  return btrace_insn_cmp (&begin, &end) == 0;
+}
diff --git a/gdb/btrace.h b/gdb/btrace.h
index 5a5b297..04466d3 100644
--- a/gdb/btrace.h
+++ b/gdb/btrace.h
@@ -307,4 +307,8 @@ extern void btrace_set_call_history (struct btrace_thread_info *,
 /* Determine if branch tracing is currently replaying TP.  */
 extern int btrace_is_replaying (struct thread_info *tp);
 
+/* Return non-zero if the branch trace for TP is empty; zero otherwise.  */
+extern int btrace_is_empty (struct thread_info *tp);
+
+
 #endif /* BTRACE_H */
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index a528f8b..14dbcd2 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -65,7 +65,6 @@ static struct btrace_thread_info *
 require_btrace (void)
 {
   struct thread_info *tp;
-  struct btrace_thread_info *btinfo;
 
   DEBUG ("require");
 
@@ -75,12 +74,10 @@ require_btrace (void)
 
   btrace_fetch (tp);
 
-  btinfo = &tp->btrace;
-
-  if (btinfo->begin == NULL)
+  if (btrace_is_empty (tp))
     error (_("No trace."));
 
-  return btinfo;
+  return &tp->btrace;
 }
 
 /* Enable branch tracing for one thread.  Warn on errors.  */
@@ -223,7 +220,8 @@ record_btrace_info (void)
   calls = 0;
 
   btinfo = &tp->btrace;
-  if (btinfo->begin != NULL)
+
+  if (!btrace_is_empty (tp))
     {
       struct btrace_call_iterator call;
       struct btrace_insn_iterator insn;
diff --git a/gdb/testsuite/gdb.btrace/delta.exp b/gdb/testsuite/gdb.btrace/delta.exp
new file mode 100644
index 0000000..9ee2629
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/delta.exp
@@ -0,0 +1,63 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile x86-record_goto.S
+if [prepare_for_testing delta.exp $testfile $srcfile] {
+    return -1
+}
+if ![runto_main] {
+    return -1
+}
+
+# proceed to some sequential code
+gdb_test "next"
+
+# start tracing
+gdb_test_no_output "record btrace"
+
+# we start without trace
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 0 instructions in 0 functions for .*" "delta, 1.1"
+gdb_test "record instruction-history" "No trace\." "delta, 1.2"
+gdb_test "record function-call-history" "No trace\." "delta, 1.3"
+
+# we record each single-step, even if we have not seen a branch yet.
+gdb_test "stepi"
+
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 1 instructions in 1 functions for .*" "delta, 3.1"
+gdb_test "record instruction-history /f 1" "
+1\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tmov *\\\$0x0,%eax\r" "delta, 3.2"
+gdb_test "record function-call-history /c 1" "
+1\tmain\r" "delta, 3.3"
+
+# make sure we don't extend the trace when we ask again.
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 1 instructions in 1 functions for .*" "delta, 4.1"
+gdb_test "record instruction-history /f 1" "
+1\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tmov *\\\$0x0,%eax\r" "delta, 4.2"
+gdb_test "record function-call-history /c 1" "
+1\tmain\r" "delta, 4.3"
diff --git a/gdb/testsuite/gdb.btrace/exception.exp b/gdb/testsuite/gdb.btrace/exception.exp
index 77a07fd..a36f93d 100755
--- a/gdb/testsuite/gdb.btrace/exception.exp
+++ b/gdb/testsuite/gdb.btrace/exception.exp
@@ -46,10 +46,11 @@ gdb_continue_to_breakpoint "cont to $bp_2" ".*$srcfile:$bp_2.*"
 # show the flat branch trace
 send_gdb "record function-call-history 1\n"
 gdb_expect_list "exception - flat" "\r\n$gdb_prompt $" {"\r
-1\ttest\\(\\)\r
-2\tfoo\\(\\)\r
-3\tbar\\(\\)\r
-4\tbad\\(\\)\r" "\r
+1\tmain\\(\\)\r
+2\ttest\\(\\)\r
+3\tfoo\\(\\)\r
+4\tbar\\(\\)\r
+5\tbad\\(\\)\r" "\r
 \[0-9\]*\ttest\\(\\)"}
 
 # show the branch trace with calls indented
@@ -58,8 +59,9 @@ gdb_expect_list "exception - flat" "\r\n$gdb_prompt $" {"\r
 # two leading spaces instead of level 0 without leading spaces.
 send_gdb "record function-call-history /c 1\n"
 gdb_expect_list "exception - calls indented" "\r\n$gdb_prompt $" {"\r
-1\t  test\\(\\)\r
-2\t    foo\\(\\)\r
-3\t      bar\\(\\)\r
-4\t        bad\\(\\)\r" "\r
+1\tmain\\(\\)\r
+2\t  test\\(\\)\r
+3\t    foo\\(\\)\r
+4\t      bar\\(\\)\r
+5\t        bad\\(\\)\r" "\r
 \[0-9\]*\t  test\\(\\)"}
diff --git a/gdb/testsuite/gdb.btrace/instruction_history.exp b/gdb/testsuite/gdb.btrace/instruction_history.exp
index e7a0e8e..a49800c 100644
--- a/gdb/testsuite/gdb.btrace/instruction_history.exp
+++ b/gdb/testsuite/gdb.btrace/instruction_history.exp
@@ -56,42 +56,42 @@ gdb_test_multiple "info record" $testname {
     }
 }
 
-# we have exactly 6 instructions here
-set message "exactly 6 instructions"
-if { $traced != 6 } {
+# we have exactly 11 instructions here
+set message "exactly 11 instructions"
+if { $traced != 11 } {
     fail $message
 } else {
     pass $message
 }
 
 # test that we see the expected instructions
-gdb_test "record instruction-history 2,6" "
-2\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-3\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
-4\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-5\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
-6\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
-
-gdb_test "record instruction-history /f 2,+5" "
-2\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-3\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
-4\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-5\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
-6\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
-
-gdb_test "record instruction-history /p 6,-5" "
-2\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-3\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
-4\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-5\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
-6\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
-
-gdb_test "record instruction-history /pf 2,6" "
-2\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-3\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
-4\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
-5\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
-6\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
+gdb_test "record instruction-history 3,7" "
+3\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+4\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
+5\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+6\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
+7\t   0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
+
+gdb_test "record instruction-history /f 3,+5" "
+3\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+4\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
+5\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+6\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
+7\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
+
+gdb_test "record instruction-history /p 7,-5" "
+3\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+4\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tdec    %eax\r
+5\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+6\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
+7\t0x\[0-9a-f\]+ <loop\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
+
+gdb_test "record instruction-history /pf 3,7" "
+3\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+4\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tdec    %eax\r
+5\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tjmp    0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r
+6\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tcmp    \\\$0x0,%eax\r
+7\t0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tje     0x\[0-9a-f\]+ <loop\\+\[0-9\]+>\r"
 
 # the following tests are checking the iterators
 # to avoid lots of regexps, we just check the number of lines that
@@ -134,7 +134,7 @@ if { $traced != $lines } {
 }
 
 # test that the iterator works
-set history_size 3
+set history_size 4
 gdb_test_no_output "set record instruction-history-size $history_size"
 set message "browse history forward start"
 set lines [test_lines_length "record instruction-history 1" $message]
@@ -144,8 +144,6 @@ if { $lines != $history_size } {
     pass $message
 }
 
-set history_size 2
-gdb_test_no_output "set record instruction-history-size $history_size"
 set message "browse history forward middle"
 set lines [test_lines_length "record instruction-history +" $message]
 if { $lines != $history_size } {
@@ -156,7 +154,7 @@ if { $lines != $history_size } {
 
 set message "browse history forward last"
 set lines [test_lines_length "record instruction-history +" $message]
-if { $lines != 1 } {
+if { $lines != 3 } {
     fail $message
 } else {
     pass $message
@@ -167,8 +165,6 @@ gdb_test "record instruction-history" "At the end of the branch trace record\\."
 # make sure we cannot move further
 gdb_test "record instruction-history" "At the end of the branch trace record\\." "browse history forward beyond 2"
 
-set history_size 3
-gdb_test_no_output "set record instruction-history-size $history_size"
 set message "browse history backward last"
 set lines [test_lines_length "record instruction-history -" $message]
 if { $lines != $history_size } {
@@ -177,8 +173,6 @@ if { $lines != $history_size } {
     pass $message
 }
 
-set history_size 2
-gdb_test_no_output "set record instruction-history-size $history_size"
 set message "browse history backward middle"
 set lines [test_lines_length "record instruction-history -" $message]
 if { $lines != $history_size } {
@@ -189,7 +183,7 @@ if { $lines != $history_size } {
 
 set message "browse history backward first"
 set lines [test_lines_length "record instruction-history -" $message]
-if { $lines != 1 } {
+if { $lines != 3 } {
     fail $message
 } else {
     pass $message
diff --git a/gdb/testsuite/gdb.btrace/record_goto.exp b/gdb/testsuite/gdb.btrace/record_goto.exp
index 8477a03..4d493df 100644
--- a/gdb/testsuite/gdb.btrace/record_goto.exp
+++ b/gdb/testsuite/gdb.btrace/record_goto.exp
@@ -39,76 +39,77 @@ gdb_test "next"
 
 # start by listing all functions
 gdb_test "record function-call-history /ci 1, +20" "
-1\t  fun4\tinst 1,3\r
-2\t    fun1\tinst 4,7\r
-3\t  fun4\tinst 8,8\r
-4\t    fun2\tinst 9,11\r
-5\t      fun1\tinst 12,15\r
-6\t    fun2\tinst 16,17\r
-7\t  fun4\tinst 18,18\r
-8\t    fun3\tinst 19,21\r
-9\t      fun1\tinst 22,25\r
-10\t    fun3\tinst 26,26\r
-11\t      fun2\tinst 27,29\r
-12\t        fun1\tinst 30,33\r
-13\t      fun2\tinst 34,35\r
-14\t    fun3\tinst 36,37\r
-15\t  fun4\tinst 38,39\r" "record_goto - list all functions"
+1\tmain\tinst 1,1\r
+2\t  fun4\tinst 2,4\r
+3\t    fun1\tinst 5,8\r
+4\t  fun4\tinst 9,9\r
+5\t    fun2\tinst 10,12\r
+6\t      fun1\tinst 13,16\r
+7\t    fun2\tinst 17,18\r
+8\t  fun4\tinst 19,19\r
+9\t    fun3\tinst 20,22\r
+10\t      fun1\tinst 23,26\r
+11\t    fun3\tinst 27,27\r
+12\t      fun2\tinst 28,30\r
+13\t        fun1\tinst 31,34\r
+14\t      fun2\tinst 35,36\r
+15\t    fun3\tinst 37,38\r
+16\t  fun4\tinst 39,40\r" "record_goto - list all functions"
 
 # let's see if we can go back in history
-gdb_test "record goto 18" "
-.*fun4 \\(\\) at record_goto.c:43.*" "record_goto - goto 18"
+gdb_test "record goto 19" "
+.*fun4 \\(\\) at record_goto.c:43.*" "record_goto - goto 19"
 
 # the function call history should start at the new location
 gdb_test "record function-call-history /ci" "
-7\t  fun4\tinst 18,18\r
-8\t    fun3\tinst 19,21\r
-9\t      fun1\tinst 22,25\r" "record_goto - function-call-history from 18 forwards"
+8\t  fun4\tinst 19,19\r
+9\t    fun3\tinst 20,22\r
+10\t      fun1\tinst 23,26\r" "record_goto - function-call-history from 19 forwards"
 
 # the instruciton history should start at the new location
 gdb_test "record instruction-history" "
-18.*\r
 19.*\r
-20.*\r" "record_goto - instruciton-history from 18 forwards"
+20.*\r
+21.*\r" "record_goto - instruciton-history from 19 forwards"
 
 # let's go to another place in the history
-gdb_test "record goto 26" "
-.*fun3 \\(\\) at record_goto.c:35.*" "record_goto - goto 26"
+gdb_test "record goto 27" "
+.*fun3 \\(\\) at record_goto.c:35.*" "record_goto - goto 27"
 
 # check the back trace at that location
 gdb_test "backtrace" "
 #0.*fun3.*at record_goto.c:35.*\r
 #1.*fun4.*at record_goto.c:44.*\r
-#2.*main.*at record_goto.c:51.*\r
-Backtrace stopped: not enough registers or memory available to unwind further" "backtrace at 25"
+#2.*main.*at record_goto.c:50.*\r
+Backtrace stopped: not enough registers or memory available to unwind further" "backtrace at 27"
 
 # walk the backtrace
 gdb_test "up" "
 .*fun4.*at record_goto.c:44.*" "up to fun4"
 gdb_test "up" "
-.*main.*at record_goto.c:51.*" "up to main"
+.*main.*at record_goto.c:50.*" "up to main"
 
 # the function call history should start at the new location
 gdb_test "record function-call-history /ci -" "
-8\t    fun3\tinst 19,21\r
-9\t      fun1\tinst 22,25\r
-10\t    fun3\tinst 26,26\r" "record_goto - function-call-history from 26 backwards"
+9\t    fun3\tinst 20,22\r
+10\t      fun1\tinst 23,26\r
+11\t    fun3\tinst 27,27\r" "record_goto - function-call-history from 27 backwards"
 
 # the instruciton history should start at the new location
 gdb_test "record instruction-history -" "
-24.*\r
 25.*\r
-26.*\r" "record_goto - instruciton-history from 26 backwards"
+26.*\r
+27.*\r" "record_goto - instruciton-history from 27 backwards"
 
 # test that we can go to the begin of the trace
 gdb_test "record goto begin" "
-.*fun4 \\(\\) at record_goto.c:40.*" "record_goto - goto begin"
+.*main \\(\\) at record_goto.c:49.*" "record_goto - goto begin"
 
 # check that we're filling up the context correctly
 gdb_test "record function-call-history /ci -" "
-1\t  fun4\tinst 1,3\r
-2\t    fun1\tinst 4,7\r
-3\t  fun4\tinst 8,8\r" "record_goto - function-call-history from begin backwards"
+1\tmain\tinst 1,1\r
+2\t  fun4\tinst 2,4\r
+3\t    fun1\tinst 5,8\r" "record_goto - function-call-history from begin backwards"
 
 # check that we're filling up the context correctly
 gdb_test "record instruction-history -" "
@@ -122,9 +123,9 @@ gdb_test "record goto 2" "
 
 # check that we're filling up the context correctly
 gdb_test "record function-call-history /ci -" "
-1\t  fun4\tinst 1,3\r
-2\t    fun1\tinst 4,7\r
-3\t  fun4\tinst 8,8\r" "record_goto - function-call-history from 2 backwards"
+1\tmain\tinst 1,1\r
+2\t  fun4\tinst 2,4\r
+3\t    fun1\tinst 5,8\r" "record_goto - function-call-history from 2 backwards"
 
 # check that we're filling up the context correctly
 gdb_test "record instruction-history -" "
@@ -138,28 +139,28 @@ gdb_test "record goto end" "
 
 # check that we're filling up the context correctly
 gdb_test "record function-call-history /ci" "
-13\t      fun2\tinst 34,35\r
-14\t    fun3\tinst 36,37\r
-15\t  fun4\tinst 38,39\r" "record_goto - function-call-history from end forwards"
+14\t      fun2\tinst 35,36\r
+15\t    fun3\tinst 37,38\r
+16\t  fun4\tinst 39,40\r" "record_goto - function-call-history from end forwards"
 
 # check that we're filling up the context correctly
 gdb_test "record instruction-history" "
-37.*\r
 38.*\r
-39.*\r" "record_goto - instruciton-history from end forwards"
+39.*\r
+40.*\r" "record_goto - instruciton-history from end forwards"
 
 # we should get the exact same history from the second to last instruction
-gdb_test "record goto 38" "
-.*fun4 \\(\\) at record_goto.c:44.*" "record_goto - goto 38"
+gdb_test "record goto 39" "
+.*fun4 \\(\\) at record_goto.c:44.*" "record_goto - goto 39"
 
 # check that we're filling up the context correctly
 gdb_test "record function-call-history /ci" "
-13\t      fun2\tinst 34,35\r
-14\t    fun3\tinst 36,37\r
-15\t  fun4\tinst 38,39\r" "record_goto - function-call-history from 38 forwards"
+14\t      fun2\tinst 35,36\r
+15\t    fun3\tinst 37,38\r
+16\t  fun4\tinst 39,40\r" "record_goto - function-call-history from 39 forwards"
 
 # check that we're filling up the context correctly
 gdb_test "record instruction-history" "
-37.*\r
 38.*\r
-39.*\r" "record_goto - instruciton-history from 38 forwards"
+39.*\r
+40.*\r" "record_goto - instruciton-history from 39 forwards"
diff --git a/gdb/testsuite/gdb.btrace/tailcall.exp b/gdb/testsuite/gdb.btrace/tailcall.exp
index ada4b14..5cadee0 100644
--- a/gdb/testsuite/gdb.btrace/tailcall.exp
+++ b/gdb/testsuite/gdb.btrace/tailcall.exp
@@ -38,18 +38,20 @@ gdb_test "next"
 
 # show the flat branch trace
 gdb_test "record function-call-history 1" "
-1\tfoo\r
-2\tbar\r
-3\tmain" "tailcall - flat"
+1\tmain\r
+2\tfoo\r
+3\tbar\r
+4\tmain" "tailcall - flat"
 
 # show the branch trace with calls indented
 gdb_test "record function-call-history /c 1" "
-1\t  foo\r
-2\t    bar\r
-3\tmain" "tailcall - calls indented"
+1\tmain\r
+2\t  foo\r
+3\t    bar\r
+4\tmain" "tailcall - calls indented"
 
 # go into bar
-gdb_test "record goto 3" "
+gdb_test "record goto 4" "
 .*bar \\(\\) at .*x86-tailcall.c:24.*" "go to bar"
 
 # check the backtrace
diff --git a/gdb/testsuite/gdb.btrace/unknown_functions.exp b/gdb/testsuite/gdb.btrace/unknown_functions.exp
index c7f33bf..a4707ce 100644
--- a/gdb/testsuite/gdb.btrace/unknown_functions.exp
+++ b/gdb/testsuite/gdb.btrace/unknown_functions.exp
@@ -41,18 +41,20 @@ gdb_continue_to_breakpoint "cont to test" ".*test.*"
 
 # show the flat branch trace
 gdb_test "record function-call-history 1" "
-1\t<unknown>\r
+1\ttest\r
 2\t<unknown>\r
 3\t<unknown>\r
-4\ttest\r
-5\tmain\r
-6\ttest" "unknown - flat"
+4\t<unknown>\r
+5\ttest\r
+6\tmain\r
+7\ttest" "unknown - flat"
 
 # show the branch trace with calls indented
 gdb_test "record function-call-history /c 1" "
-1\t    <unknown>\r
-2\t      <unknown>\r
-3\t    <unknown>\r
-4\t  test\r
-5\tmain\r
-6\t  test" "unknown - calls indented"
+1\t  test\r
+2\t    <unknown>\r
+3\t      <unknown>\r
+4\t    <unknown>\r
+5\t  test\r
+6\tmain\r
+7\t  test" "unknown - calls indented"
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 17/24] record-btrace: add record goto target methods
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (18 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 01/24] gdbarch: add instruction predicate methods Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:08   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 06/24] btrace: increase buffer size Markus Metzger
                   ` (4 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches, Christian Himpel

Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
CC: Christian Himpel  <christian.himpel@intel.com>
2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* record-btrace.c (record_btrace_set_replay,
	record_btrace_goto_begin, record_btrace_goto_end,
	record_btrace_goto): New.
	(init_record_btrace_ops): Initialize them.
	* NEWS: Announce it.

testsuite/
	* gdb.btrace/Makefile.in (EXECUTABLES): Add record_goto.
	* gdb.btrace/record_goto.c: New.
	* gdb.btrace/record_goto.exp: New.
	* gdb.btrace/x86-record_goto.S: New.


---
 gdb/NEWS                                   |    2 +
 gdb/record-btrace.c                        |   91 ++++++++
 gdb/testsuite/gdb.btrace/Makefile.in       |    2 +-
 gdb/testsuite/gdb.btrace/record_goto.c     |   51 +++++
 gdb/testsuite/gdb.btrace/record_goto.exp   |  152 +++++++++++++
 gdb/testsuite/gdb.btrace/x86-record_goto.S |  332 ++++++++++++++++++++++++++++
 6 files changed, 629 insertions(+), 1 deletions(-)
 create mode 100644 gdb/testsuite/gdb.btrace/record_goto.c
 create mode 100644 gdb/testsuite/gdb.btrace/record_goto.exp
 create mode 100644 gdb/testsuite/gdb.btrace/x86-record_goto.S

diff --git a/gdb/NEWS b/gdb/NEWS
index 6ac910a..bfe4dd4 100644
--- a/gdb/NEWS
+++ b/gdb/NEWS
@@ -13,6 +13,8 @@ Nios II ELF 			nios2*-*-elf
 Nios II GNU/Linux		nios2*-*-linux
 Texas Instruments MSP430	msp430*-*-elf
 
+* The btrace record target supports the 'record goto' command.
+
 * The command 'record function-call-history' supports a new modifier '/c' to
   indent the function names based on their call stack depth.
   The fields for the '/i' and '/l' modifier have been reordered.
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 2b552d5..d6508bd 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -1023,6 +1023,94 @@ record_btrace_find_new_threads (struct target_ops *ops)
       }
 }
 
+/* Set the replay branch trace instruction iterator.  */
+
+static void
+record_btrace_set_replay (struct btrace_thread_info *btinfo,
+			  const struct btrace_insn_iterator *it)
+{
+  if (it == NULL || it->function == NULL)
+    {
+      if (btinfo->replay == NULL)
+	return;
+
+      xfree (btinfo->replay);
+      btinfo->replay = NULL;
+    }
+  else
+    {
+      if (btinfo->replay == NULL)
+	btinfo->replay = xzalloc (sizeof (*btinfo->replay));
+      else if (btrace_insn_cmp (btinfo->replay, it) == 0)
+	return;
+
+      *btinfo->replay = *it;
+    }
+
+  /* Clear the function call and instruction histories so we start anew
+     from the new replay position.  */
+  xfree (btinfo->insn_history);
+  xfree (btinfo->call_history);
+
+  btinfo->insn_history = NULL;
+  btinfo->call_history = NULL;
+
+  registers_changed ();
+  reinit_frame_cache ();
+  print_stack_frame (get_selected_frame (NULL), 1, SRC_AND_LOC);
+}
+
+/* The to_goto_record_begin method of target record-btrace.  */
+
+static void
+record_btrace_goto_begin (void)
+{
+  struct btrace_thread_info *btinfo;
+  struct btrace_insn_iterator begin;
+
+  btinfo = require_btrace ();
+
+  btrace_insn_begin (&begin, btinfo);
+  record_btrace_set_replay (btinfo, &begin);
+}
+
+/* The to_goto_record_end method of target record-btrace.  */
+
+static void
+record_btrace_goto_end (void)
+{
+  struct btrace_thread_info *btinfo;
+
+  btinfo = require_btrace ();
+
+  record_btrace_set_replay (btinfo, NULL);
+}
+
+/* The to_goto_record method of target record-btrace.  */
+
+static void
+record_btrace_goto (ULONGEST insn)
+{
+  struct btrace_thread_info *btinfo;
+  struct btrace_insn_iterator it;
+  unsigned int number;
+  int found;
+
+  number = (unsigned int) insn;
+
+  /* Check for wrap-arounds.  */
+  if (number != insn)
+    error (_("Instruction number out of range."));
+
+  btinfo = require_btrace ();
+
+  found = btrace_find_insn_by_number (&it, btinfo, number);
+  if (found == 0)
+    error (_("No such instruction."));
+
+  record_btrace_set_replay (btinfo, &it);
+}
+
 /* Initialize the record-btrace target ops.  */
 
 static void
@@ -1058,6 +1146,9 @@ init_record_btrace_ops (void)
   ops->to_resume = record_btrace_resume;
   ops->to_wait = record_btrace_wait;
   ops->to_find_new_threads = record_btrace_find_new_threads;
+  ops->to_goto_record_begin = record_btrace_goto_begin;
+  ops->to_goto_record_end = record_btrace_goto_end;
+  ops->to_goto_record = record_btrace_goto;
   ops->to_stratum = record_stratum;
   ops->to_magic = OPS_MAGIC;
 }
diff --git a/gdb/testsuite/gdb.btrace/Makefile.in b/gdb/testsuite/gdb.btrace/Makefile.in
index 5c70700..aa2820a 100644
--- a/gdb/testsuite/gdb.btrace/Makefile.in
+++ b/gdb/testsuite/gdb.btrace/Makefile.in
@@ -2,7 +2,7 @@ VPATH = @srcdir@
 srcdir = @srcdir@
 
 EXECUTABLES   = enable function_call_history instruction_history tailcall \
-  exception
+  exception record_goto
 
 MISCELLANEOUS =
 
diff --git a/gdb/testsuite/gdb.btrace/record_goto.c b/gdb/testsuite/gdb.btrace/record_goto.c
new file mode 100644
index 0000000..1250708
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/record_goto.c
@@ -0,0 +1,51 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2013 Free Software Foundation, Inc.
+
+   Contributed by Intel Corp. <markus.t.metzger@intel.com>
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+void
+fun1 (void)
+{
+}
+
+void
+fun2 (void)
+{
+  fun1 ();
+}
+
+void
+fun3 (void)
+{
+  fun1 ();
+  fun2 ();
+}
+
+void
+fun4 (void)
+{
+  fun1 ();
+  fun2 ();
+  fun3 ();
+}
+
+int
+main (void)
+{
+  fun4 ();
+  return 0;
+}
diff --git a/gdb/testsuite/gdb.btrace/record_goto.exp b/gdb/testsuite/gdb.btrace/record_goto.exp
new file mode 100644
index 0000000..a9f9a64
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/record_goto.exp
@@ -0,0 +1,152 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile x86-record_goto.S
+if [prepare_for_testing record_goto.exp $testfile $srcfile] {
+    return -1
+}
+if ![runto_main] {
+    return -1
+}
+
+# we want small context sizes to simplify the test
+gdb_test_no_output "set record instruction-history-size 3"
+gdb_test_no_output "set record function-call-history-size 3"
+
+# trace the call to the test function
+gdb_test_no_output "record btrace"
+gdb_test "next"
+
+# start by listing all functions
+gdb_test "record function-call-history /ci 1, +20" "
+1\t  fun4\tinst 1,3\r
+2\t    fun1\tinst 4,7\r
+3\t  fun4\tinst 8,8\r
+4\t    fun2\tinst 9,11\r
+5\t      fun1\tinst 12,15\r
+6\t    fun2\tinst 16,17\r
+7\t  fun4\tinst 18,18\r
+8\t    fun3\tinst 19,21\r
+9\t      fun1\tinst 22,25\r
+10\t    fun3\tinst 26,26\r
+11\t      fun2\tinst 27,29\r
+12\t        fun1\tinst 30,33\r
+13\t      fun2\tinst 34,35\r
+14\t    fun3\tinst 36,37\r
+15\t  fun4\tinst 38,39\r" "record_goto - list all functions"
+
+# let's see if we can go back in history
+gdb_test "record goto 18" "
+.*fun4 \\(\\) at record_goto.c:43.*" "record_goto - goto 18"
+
+# the function call history should start at the new location
+gdb_test "record function-call-history /ci" "
+7\t  fun4\tinst 18,18\r
+8\t    fun3\tinst 19,21\r
+9\t      fun1\tinst 22,25\r" "record_goto - function-call-history from 18 forwards"
+
+# the instruciton history should start at the new location
+gdb_test "record instruction-history" "
+18.*\r
+19.*\r
+20.*\r" "record_goto - instruciton-history from 18 forwards"
+
+# let's go to another place in the history
+gdb_test "record goto 26" "
+.*fun3 \\(\\) at record_goto.c:35.*" "record_goto - goto 26"
+
+# the function call history should start at the new location
+gdb_test "record function-call-history /ci -" "
+8\t    fun3\tinst 19,21\r
+9\t      fun1\tinst 22,25\r
+10\t    fun3\tinst 26,26\r" "record_goto - function-call-history from 26 backwards"
+
+# the instruciton history should start at the new location
+gdb_test "record instruction-history -" "
+24.*\r
+25.*\r
+26.*\r" "record_goto - instruciton-history from 26 backwards"
+
+# test that we can go to the begin of the trace
+gdb_test "record goto begin" "
+.*fun4 \\(\\) at record_goto.c:40.*" "record_goto - goto begin"
+
+# check that we're filling up the context correctly
+gdb_test "record function-call-history /ci -" "
+1\t  fun4\tinst 1,3\r
+2\t    fun1\tinst 4,7\r
+3\t  fun4\tinst 8,8\r" "record_goto - function-call-history from begin backwards"
+
+# check that we're filling up the context correctly
+gdb_test "record instruction-history -" "
+1.*\r
+2.*\r
+3.*\r" "record_goto - instruciton-history from begin backwards"
+
+# we should get the exact same history from the first instruction
+gdb_test "record goto 2" "
+.*fun4 \\(\\) at record_goto.c:40.*" "record_goto - goto 2"
+
+# check that we're filling up the context correctly
+gdb_test "record function-call-history /ci -" "
+1\t  fun4\tinst 1,3\r
+2\t    fun1\tinst 4,7\r
+3\t  fun4\tinst 8,8\r" "record_goto - function-call-history from 2 backwards"
+
+# check that we're filling up the context correctly
+gdb_test "record instruction-history -" "
+1.*\r
+2.*\r
+3.*\r" "record_goto - instruciton-history from 2 backwards"
+
+# check that we can go to the end of the trace
+gdb_test "record goto end" "
+.*main \\(\\) at record_goto.c:50.*" "record_goto - goto end"
+
+# check that we're filling up the context correctly
+gdb_test "record function-call-history /ci" "
+13\t      fun2\tinst 34,35\r
+14\t    fun3\tinst 36,37\r
+15\t  fun4\tinst 38,39\r" "record_goto - function-call-history from end forwards"
+
+# check that we're filling up the context correctly
+gdb_test "record instruction-history" "
+37.*\r
+38.*\r
+39.*\r" "record_goto - instruciton-history from end forwards"
+
+# we should get the exact same history from the second to last instruction
+gdb_test "record goto 38" "
+.*fun4 \\(\\) at record_goto.c:44.*" "record_goto - goto 38"
+
+# check that we're filling up the context correctly
+gdb_test "record function-call-history /ci" "
+13\t      fun2\tinst 34,35\r
+14\t    fun3\tinst 36,37\r
+15\t  fun4\tinst 38,39\r" "record_goto - function-call-history from 38 forwards"
+
+# check that we're filling up the context correctly
+gdb_test "record instruction-history" "
+37.*\r
+38.*\r
+39.*\r" "record_goto - instruciton-history from 38 forwards"
diff --git a/gdb/testsuite/gdb.btrace/x86-record_goto.S b/gdb/testsuite/gdb.btrace/x86-record_goto.S
new file mode 100644
index 0000000..d2e6621
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/x86-record_goto.S
@@ -0,0 +1,332 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2013 Free Software Foundation, Inc.
+
+   Contributed by Intel Corp. <markus.t.metzger@intel.com>
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+
+   This file has been generated using:
+   gcc -S -g record_goto.c -o x86-record_goto.S  */
+
+	.file	"record_goto.c"
+	.section	.debug_abbrev,"",@progbits
+.Ldebug_abbrev0:
+	.section	.debug_info,"",@progbits
+.Ldebug_info0:
+	.section	.debug_line,"",@progbits
+.Ldebug_line0:
+	.text
+.Ltext0:
+.globl fun1
+	.type	fun1, @function
+fun1:
+.LFB0:
+	.file 1 "record_goto.c"
+	.loc 1 22 0
+	.cfi_startproc
+	pushq	%rbp
+	.cfi_def_cfa_offset 16
+	movq	%rsp, %rbp
+	.cfi_offset 6, -16
+	.cfi_def_cfa_register 6
+	.loc 1 23 0
+	leave
+	.cfi_def_cfa 7, 8
+	ret
+	.cfi_endproc
+.LFE0:
+	.size	fun1, .-fun1
+.globl fun2
+	.type	fun2, @function
+fun2:
+.LFB1:
+	.loc 1 27 0
+	.cfi_startproc
+	pushq	%rbp
+	.cfi_def_cfa_offset 16
+	movq	%rsp, %rbp
+	.cfi_offset 6, -16
+	.cfi_def_cfa_register 6
+	.loc 1 28 0
+	call	fun1
+	.loc 1 29 0
+	leave
+	.cfi_def_cfa 7, 8
+	ret
+	.cfi_endproc
+.LFE1:
+	.size	fun2, .-fun2
+.globl fun3
+	.type	fun3, @function
+fun3:
+.LFB2:
+	.loc 1 33 0
+	.cfi_startproc
+	pushq	%rbp
+	.cfi_def_cfa_offset 16
+	movq	%rsp, %rbp
+	.cfi_offset 6, -16
+	.cfi_def_cfa_register 6
+	.loc 1 34 0
+	call	fun1
+	.loc 1 35 0
+	call	fun2
+	.loc 1 36 0
+	leave
+	.cfi_def_cfa 7, 8
+	ret
+	.cfi_endproc
+.LFE2:
+	.size	fun3, .-fun3
+.globl fun4
+	.type	fun4, @function
+fun4:
+.LFB3:
+	.loc 1 40 0
+	.cfi_startproc
+	pushq	%rbp
+	.cfi_def_cfa_offset 16
+	movq	%rsp, %rbp
+	.cfi_offset 6, -16
+	.cfi_def_cfa_register 6
+	.loc 1 41 0
+	call	fun1
+	.loc 1 42 0
+	call	fun2
+	.loc 1 43 0
+	call	fun3
+	.loc 1 44 0
+	leave
+	.cfi_def_cfa 7, 8
+	ret
+	.cfi_endproc
+.LFE3:
+	.size	fun4, .-fun4
+.globl main
+	.type	main, @function
+main:
+.LFB4:
+	.loc 1 48 0
+	.cfi_startproc
+	pushq	%rbp
+	.cfi_def_cfa_offset 16
+	movq	%rsp, %rbp
+	.cfi_offset 6, -16
+	.cfi_def_cfa_register 6
+	.loc 1 49 0
+	call	fun4
+	.loc 1 50 0
+	movl	$0, %eax
+	.loc 1 51 0
+	leave
+	.cfi_def_cfa 7, 8
+	ret
+	.cfi_endproc
+.LFE4:
+	.size	main, .-main
+.Letext0:
+	.section	.debug_info
+	.long	0xbc
+	.value	0x3
+	.long	.Ldebug_abbrev0
+	.byte	0x8
+	.uleb128 0x1
+	.long	.LASF4
+	.byte	0x1
+	.long	.LASF5
+	.long	.LASF6
+	.quad	.Ltext0
+	.quad	.Letext0
+	.long	.Ldebug_line0
+	.uleb128 0x2
+	.byte	0x1
+	.long	.LASF0
+	.byte	0x1
+	.byte	0x15
+	.byte	0x1
+	.quad	.LFB0
+	.quad	.LFE0
+	.byte	0x1
+	.byte	0x9c
+	.uleb128 0x2
+	.byte	0x1
+	.long	.LASF1
+	.byte	0x1
+	.byte	0x1a
+	.byte	0x1
+	.quad	.LFB1
+	.quad	.LFE1
+	.byte	0x1
+	.byte	0x9c
+	.uleb128 0x2
+	.byte	0x1
+	.long	.LASF2
+	.byte	0x1
+	.byte	0x20
+	.byte	0x1
+	.quad	.LFB2
+	.quad	.LFE2
+	.byte	0x1
+	.byte	0x9c
+	.uleb128 0x2
+	.byte	0x1
+	.long	.LASF3
+	.byte	0x1
+	.byte	0x27
+	.byte	0x1
+	.quad	.LFB3
+	.quad	.LFE3
+	.byte	0x1
+	.byte	0x9c
+	.uleb128 0x3
+	.byte	0x1
+	.long	.LASF7
+	.byte	0x1
+	.byte	0x2f
+	.byte	0x1
+	.long	0xb8
+	.quad	.LFB4
+	.quad	.LFE4
+	.byte	0x1
+	.byte	0x9c
+	.uleb128 0x4
+	.byte	0x4
+	.byte	0x5
+	.string	"int"
+	.byte	0x0
+	.section	.debug_abbrev
+	.uleb128 0x1
+	.uleb128 0x11
+	.byte	0x1
+	.uleb128 0x25
+	.uleb128 0xe
+	.uleb128 0x13
+	.uleb128 0xb
+	.uleb128 0x3
+	.uleb128 0xe
+	.uleb128 0x1b
+	.uleb128 0xe
+	.uleb128 0x11
+	.uleb128 0x1
+	.uleb128 0x12
+	.uleb128 0x1
+	.uleb128 0x10
+	.uleb128 0x6
+	.byte	0x0
+	.byte	0x0
+	.uleb128 0x2
+	.uleb128 0x2e
+	.byte	0x0
+	.uleb128 0x3f
+	.uleb128 0xc
+	.uleb128 0x3
+	.uleb128 0xe
+	.uleb128 0x3a
+	.uleb128 0xb
+	.uleb128 0x3b
+	.uleb128 0xb
+	.uleb128 0x27
+	.uleb128 0xc
+	.uleb128 0x11
+	.uleb128 0x1
+	.uleb128 0x12
+	.uleb128 0x1
+	.uleb128 0x40
+	.uleb128 0xa
+	.byte	0x0
+	.byte	0x0
+	.uleb128 0x3
+	.uleb128 0x2e
+	.byte	0x0
+	.uleb128 0x3f
+	.uleb128 0xc
+	.uleb128 0x3
+	.uleb128 0xe
+	.uleb128 0x3a
+	.uleb128 0xb
+	.uleb128 0x3b
+	.uleb128 0xb
+	.uleb128 0x27
+	.uleb128 0xc
+	.uleb128 0x49
+	.uleb128 0x13
+	.uleb128 0x11
+	.uleb128 0x1
+	.uleb128 0x12
+	.uleb128 0x1
+	.uleb128 0x40
+	.uleb128 0xa
+	.byte	0x0
+	.byte	0x0
+	.uleb128 0x4
+	.uleb128 0x24
+	.byte	0x0
+	.uleb128 0xb
+	.uleb128 0xb
+	.uleb128 0x3e
+	.uleb128 0xb
+	.uleb128 0x3
+	.uleb128 0x8
+	.byte	0x0
+	.byte	0x0
+	.byte	0x0
+	.section	.debug_pubnames,"",@progbits
+	.long	0x3b
+	.value	0x2
+	.long	.Ldebug_info0
+	.long	0xc0
+	.long	0x2d
+	.string	"fun1"
+	.long	0x48
+	.string	"fun2"
+	.long	0x63
+	.string	"fun3"
+	.long	0x7e
+	.string	"fun4"
+	.long	0x99
+	.string	"main"
+	.long	0x0
+	.section	.debug_aranges,"",@progbits
+	.long	0x2c
+	.value	0x2
+	.long	.Ldebug_info0
+	.byte	0x8
+	.byte	0x0
+	.value	0x0
+	.value	0x0
+	.quad	.Ltext0
+	.quad	.Letext0-.Ltext0
+	.quad	0x0
+	.quad	0x0
+	.section	.debug_str,"MS",@progbits,1
+.LASF3:
+	.string	"fun4"
+.LASF5:
+	.string	"record_goto.c"
+.LASF4:
+	.string	"GNU C 4.4.4 20100726 (Red Hat 4.4.4-13)"
+.LASF7:
+	.string	"main"
+.LASF1:
+	.string	"fun2"
+.LASF0:
+	.string	"fun1"
+.LASF6:
+	.string	"/users/mmetzger/gdb/gerrit/git/gdb/testsuite/gdb.btrace"
+.LASF2:
+	.string	"fun3"
+	.ident	"GCC: (GNU) 4.4.4 20100726 (Red Hat 4.4.4-13)"
+	.section	.note.GNU-stack,"",@progbits
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 00/24] record-btrace: reverse
@ 2013-07-03  9:15 Markus Metzger
  2013-07-03  9:14 ` [patch v4 05/24] record-btrace: start counting at one Markus Metzger
                   ` (24 more replies)
  0 siblings, 25 replies; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

This addresses the failures Jan reported on v3.  I also merged in a few
fixes for bugs I found.

This patch series adds support for the "record goto" command and for
reverse execution to the btrace record target.

Since btrace only records the control flow, reverse execution is limited
to modifying the PC register.  It does not support evaluating variables.
We do support the "backtrace" command, though.  The back trace is computed
from the control-flow trace rather than by unwinding stack frames.
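
For example, with the record_goto test added in this series, going back in
the recorded history and asking for a backtrace yields something like this
(taken from the test's expectations; the output of the goto command itself
is elided):

(gdb) record goto 26
...
(gdb) backtrace
#0  fun3 () at record_goto.c:35
#1  fun4 () at record_goto.c:44
#2  main () at record_goto.c:51
Backtrace stopped: not enough registers or memory available to unwind further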

There are a number of open questions regarding unwinding and reverse
execution.  They are listed in the commit messages of the respective patches
towards the end of the series.  This makes the series more of an RFC than a
real PATCH series, I suppose.

It also changes the existing "record function-call-history" and "record
instruction-history" commands slightly and fixes PR/15240.  The "record
function-call-history" command can now show the call relationship like this:

(gdb) record function-call-history /cli
12          fib inst 101,111    at src/fib.c:3,7
13            fib       inst 112,124    at src/fib.c:3,8
14          fib inst 125,129    at src/fib.c:7
15            fib       inst 130,142    at src/fib.c:3,8
16          fib inst 143,147    at src/fib.c:7,8
17        fib   inst 148,152    at src/fib.c:7,8
18      fib     inst 153,157    at src/fib.c:7
19        fib   inst 158,168    at src/fib.c:3,7
20          fib inst 169,179    at src/fib.c:3,7
21            fib       inst 180,185    at src/fib.c:3,4


Markus Metzger (24):
  gdbarch: add instruction predicate methods
  record: upcase record_print_flag enumeration constants
  btrace: change branch trace data structure
  record-btrace: fix insn range in function call history
  record-btrace: start counting at one
  btrace: increase buffer size
  record-btrace: optionally indent function call history
  record-btrace: make ranges include begin and end
  btrace: add replay position to btrace thread info
  target: add ops parameter to to_prepare_to_store method
  record-btrace: supply register target methods
  frame, backtrace: allow targets to supply a frame unwinder
  record-btrace, frame: supply target-specific unwinder
  record-btrace: provide xfer_partial target method
  record-btrace: add to_wait and to_resume target methods.
  record-btrace: provide target_find_new_threads method
  record-btrace: add record goto target methods
  record-btrace: extend unwinder
  btrace, linux: fix memory leak when reading branch trace
  btrace, gdbserver: read branch trace incrementally
  record-btrace: show trace from enable location
  infrun: reverse stepping from unknown functions
  record-btrace: add (reverse-)stepping support
  record-btrace: skip tail calls in back trace

 gdb/NEWS                                           |   18 +
 gdb/amd64-tdep.c                                   |   67 +
 gdb/arch-utils.c                                   |   15 +
 gdb/arch-utils.h                                   |    4 +
 gdb/btrace.c                                       | 1373 +++++++++++++++++---
 gdb/btrace.h                                       |  262 ++++-
 gdb/common/btrace-common.h                         |    6 +-
 gdb/common/linux-btrace.c                          |  110 ++-
 gdb/common/linux-btrace.h                          |    5 +-
 gdb/doc/gdb.texinfo                                |   30 +-
 gdb/dwarf2-frame.c                                 |    8 +-
 gdb/frame-unwind.c                                 |   80 +-
 gdb/frame.c                                        |   47 +-
 gdb/frame.h                                        |    8 +-
 gdb/gdbarch.c                                      |  105 ++
 gdb/gdbarch.h                                      |   24 +
 gdb/gdbarch.sh                                     |    9 +
 gdb/gdbserver/linux-low.c                          |   18 +-
 gdb/gdbserver/server.c                             |   11 +-
 gdb/gdbserver/target.h                             |    6 +-
 gdb/i386-tdep.c                                    |   59 +
 gdb/inf-child.c                                    |    2 +-
 gdb/infrun.c                                       |    2 +-
 gdb/monitor.c                                      |    2 +-
 gdb/ravenscar-thread.c                             |    7 +-
 gdb/record-btrace.c                                | 1375 +++++++++++++++++---
 gdb/record-full.c                                  |    3 +-
 gdb/record.c                                       |    8 +-
 gdb/record.h                                       |    7 +-
 gdb/remote-m32r-sdi.c                              |    2 +-
 gdb/remote-mips.c                                  |    5 +-
 gdb/remote.c                                       |   28 +-
 gdb/target.c                                       |   51 +-
 gdb/target.h                                       |   26 +-
 gdb/testsuite/gdb.btrace/Makefile.in               |    3 +-
 gdb/testsuite/gdb.btrace/delta.exp                 |   76 ++
 gdb/testsuite/gdb.btrace/exception.cc              |   56 +
 gdb/testsuite/gdb.btrace/exception.exp             |   67 +
 gdb/testsuite/gdb.btrace/finish.exp                |   70 +
 gdb/testsuite/gdb.btrace/function_call_history.exp |  328 +++--
 gdb/testsuite/gdb.btrace/instruction_history.exp   |   72 +-
 gdb/testsuite/gdb.btrace/multi-thread-step.c       |   53 +
 gdb/testsuite/gdb.btrace/multi-thread-step.exp     |   84 ++
 gdb/testsuite/gdb.btrace/next.exp                  |   89 ++
 gdb/testsuite/gdb.btrace/nexti.exp                 |   89 ++
 gdb/testsuite/gdb.btrace/record_goto.c             |   51 +
 gdb/testsuite/gdb.btrace/record_goto.exp           |  166 +++
 gdb/testsuite/gdb.btrace/rn-dl-bind.c              |   37 +
 gdb/testsuite/gdb.btrace/rn-dl-bind.exp            |   48 +
 gdb/testsuite/gdb.btrace/step.exp                  |  113 ++
 gdb/testsuite/gdb.btrace/stepi.exp                 |  114 ++
 gdb/testsuite/gdb.btrace/tailcall.exp              |   85 ++
 gdb/testsuite/gdb.btrace/unknown_functions.c       |   45 +
 gdb/testsuite/gdb.btrace/unknown_functions.exp     |   60 +
 gdb/testsuite/gdb.btrace/x86-record_goto.S         |  332 +++++
 gdb/testsuite/gdb.btrace/x86-tailcall.S            |  269 ++++
 gdb/testsuite/gdb.btrace/x86-tailcall.c            |   39 +
 57 files changed, 5420 insertions(+), 709 deletions(-)
 create mode 100644 gdb/testsuite/gdb.btrace/delta.exp
 create mode 100644 gdb/testsuite/gdb.btrace/exception.cc
 create mode 100755 gdb/testsuite/gdb.btrace/exception.exp
 create mode 100644 gdb/testsuite/gdb.btrace/finish.exp
 create mode 100644 gdb/testsuite/gdb.btrace/multi-thread-step.c
 create mode 100644 gdb/testsuite/gdb.btrace/multi-thread-step.exp
 create mode 100644 gdb/testsuite/gdb.btrace/next.exp
 create mode 100644 gdb/testsuite/gdb.btrace/nexti.exp
 create mode 100644 gdb/testsuite/gdb.btrace/record_goto.c
 create mode 100644 gdb/testsuite/gdb.btrace/record_goto.exp
 create mode 100644 gdb/testsuite/gdb.btrace/rn-dl-bind.c
 create mode 100644 gdb/testsuite/gdb.btrace/rn-dl-bind.exp
 create mode 100644 gdb/testsuite/gdb.btrace/step.exp
 create mode 100644 gdb/testsuite/gdb.btrace/stepi.exp
 create mode 100644 gdb/testsuite/gdb.btrace/tailcall.exp
 create mode 100644 gdb/testsuite/gdb.btrace/unknown_functions.c
 create mode 100644 gdb/testsuite/gdb.btrace/unknown_functions.exp
 create mode 100644 gdb/testsuite/gdb.btrace/x86-record_goto.S
 create mode 100644 gdb/testsuite/gdb.btrace/x86-tailcall.S
 create mode 100644 gdb/testsuite/gdb.btrace/x86-tailcall.c

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 12/24] frame, backtrace: allow targets to supply a frame unwinder
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (16 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 23/24] record-btrace: add (reverse-)stepping support Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:14   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 01/24] gdbarch: add instruction predicate methods Markus Metzger
                   ` (6 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Allow targets to supply their own target-specific frame unwinder.  If a
target-specific unwinder is supplied, it is tried before any other
unwinder.
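
For illustration, a target would install such an unwinder roughly like this
(a sketch with made-up names; the record-btrace target supplies its
btrace-based unwinder this way in a later patch of this series):

  /* MY_TARGET_FRAME_UNWIND is a hypothetical struct frame_unwind with the
     usual sniffer/this_id/prev_register callbacks, defined elsewhere.  */
  extern const struct frame_unwind my_target_frame_unwind;

  static void
  init_my_target_ops (struct target_ops *ops)
  {
    /* Tried before any architecture unwinder; leave it NULL to opt out.  */
    ops->to_get_unwinder = &my_target_frame_unwind;
  }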

gdb/
2013-02-11  Jan Kratochvil  <jan.kratochvil@redhat.com>

        * dwarf2-frame.c (dwarf2_frame_cfa): Move UNWIND_UNAVAILABLE check
        earlier.
        * frame-unwind.c: Include target.h.
        (frame_unwind_try_unwinder): New function with code from ...
        (frame_unwind_find_by_frame): ... here.  New variable
        unwinder_from_target, call also target_get_unwinder and
        frame_unwind_try_unwinder for it.
        * frame.c (get_frame_unwind_stop_reason): Unconditionally call
        get_prev_frame_1.
        * target.c (target_get_unwinder): New.
        * target.h (struct target_ops): New field to_get_unwinder.
        (target_get_unwinder): New declaration.


---
 gdb/dwarf2-frame.c |    8 ++--
 gdb/frame-unwind.c |   80 +++++++++++++++++++++++++++++++++------------------
 gdb/frame.c        |    9 ++----
 gdb/target.c       |   14 +++++++++
 gdb/target.h       |    7 ++++
 5 files changed, 80 insertions(+), 38 deletions(-)

diff --git a/gdb/dwarf2-frame.c b/gdb/dwarf2-frame.c
index 5c88b03..2aff23e 100644
--- a/gdb/dwarf2-frame.c
+++ b/gdb/dwarf2-frame.c
@@ -1497,16 +1497,16 @@ dwarf2_frame_cfa (struct frame_info *this_frame)
 {
   while (get_frame_type (this_frame) == INLINE_FRAME)
     this_frame = get_prev_frame (this_frame);
+  if (get_frame_unwind_stop_reason (this_frame) == UNWIND_UNAVAILABLE)
+    throw_error (NOT_AVAILABLE_ERROR,
+                _("can't compute CFA for this frame: "
+                  "required registers or memory are unavailable"));
   /* This restriction could be lifted if other unwinders are known to
      compute the frame base in a way compatible with the DWARF
      unwinder.  */
   if (!frame_unwinder_is (this_frame, &dwarf2_frame_unwind)
       && !frame_unwinder_is (this_frame, &dwarf2_tailcall_frame_unwind))
     error (_("can't compute CFA for this frame"));
-  if (get_frame_unwind_stop_reason (this_frame) == UNWIND_UNAVAILABLE)
-    throw_error (NOT_AVAILABLE_ERROR,
-		 _("can't compute CFA for this frame: "
-		   "required registers or memory are unavailable"));
   return get_frame_base (this_frame);
 }
 \f
diff --git a/gdb/frame-unwind.c b/gdb/frame-unwind.c
index b66febf..fe5f8fb 100644
--- a/gdb/frame-unwind.c
+++ b/gdb/frame-unwind.c
@@ -27,6 +27,7 @@
 #include "exceptions.h"
 #include "gdb_assert.h"
 #include "gdb_obstack.h"
+#include "target.h"
 
 static struct gdbarch_data *frame_unwind_data;
 
@@ -88,6 +89,48 @@ frame_unwind_append_unwinder (struct gdbarch *gdbarch,
   (*ip)->unwinder = unwinder;
 }
 
+/* Call SNIFFER from UNWINDER.  If it succeeded set UNWINDER for
+   THIS_FRAME and return 1.  Otherwise the function keeps THIS_FRAME
+   unchanged and returns 0.  */
+
+static int
+frame_unwind_try_unwinder (struct frame_info *this_frame, void **this_cache,
+                          const struct frame_unwind *unwinder)
+{
+  struct cleanup *old_cleanup;
+  volatile struct gdb_exception ex;
+  int res = 0;
+
+  old_cleanup = frame_prepare_for_sniffer (this_frame, unwinder);
+
+  TRY_CATCH (ex, RETURN_MASK_ERROR)
+    {
+      res = unwinder->sniffer (unwinder, this_frame, this_cache);
+    }
+  if (ex.reason < 0 && ex.error == NOT_AVAILABLE_ERROR)
+    {
+      /* This usually means that not even the PC is available,
+        thus most unwinders aren't able to determine if they're
+        the best fit.  Keep trying.  Fallback prologue unwinders
+        should always accept the frame.  */
+      do_cleanups (old_cleanup);
+      return 0;
+    }
+  else if (ex.reason < 0)
+    throw_exception (ex);
+  else if (res)
+    {
+      discard_cleanups (old_cleanup);
+      return 1;
+    }
+  else
+    {
+      do_cleanups (old_cleanup);
+      return 0;
+    }
+  gdb_assert_not_reached ("frame_unwind_try_unwinder");
+}
+
 /* Iterate through sniffers for THIS_FRAME frame until one returns with an
    unwinder implementation.  THIS_FRAME->UNWIND must be NULL, it will get set
    by this function.  Possibly initialize THIS_CACHE.  */
@@ -98,37 +141,18 @@ frame_unwind_find_by_frame (struct frame_info *this_frame, void **this_cache)
   struct gdbarch *gdbarch = get_frame_arch (this_frame);
   struct frame_unwind_table *table = gdbarch_data (gdbarch, frame_unwind_data);
   struct frame_unwind_table_entry *entry;
+  const struct frame_unwind *unwinder_from_target;
+
+  unwinder_from_target = target_get_unwinder ();
+  if (unwinder_from_target != NULL
+      && frame_unwind_try_unwinder (this_frame, this_cache,
+                                   unwinder_from_target))
+    return;
 
   for (entry = table->list; entry != NULL; entry = entry->next)
-    {
-      struct cleanup *old_cleanup;
-      volatile struct gdb_exception ex;
-      int res = 0;
-
-      old_cleanup = frame_prepare_for_sniffer (this_frame, entry->unwinder);
-
-      TRY_CATCH (ex, RETURN_MASK_ERROR)
-	{
-	  res = entry->unwinder->sniffer (entry->unwinder, this_frame,
-					  this_cache);
-	}
-      if (ex.reason < 0 && ex.error == NOT_AVAILABLE_ERROR)
-	{
-	  /* This usually means that not even the PC is available,
-	     thus most unwinders aren't able to determine if they're
-	     the best fit.  Keep trying.  Fallback prologue unwinders
-	     should always accept the frame.  */
-	}
-      else if (ex.reason < 0)
-	throw_exception (ex);
-      else if (res)
-        {
-          discard_cleanups (old_cleanup);
-          return;
-        }
+    if (frame_unwind_try_unwinder (this_frame, this_cache, entry->unwinder))
+      return;
 
-      do_cleanups (old_cleanup);
-    }
   internal_error (__FILE__, __LINE__, _("frame_unwind_find_by_frame failed"));
 }
 
diff --git a/gdb/frame.c b/gdb/frame.c
index d52c26a..5c080eb 100644
--- a/gdb/frame.c
+++ b/gdb/frame.c
@@ -2426,13 +2426,10 @@ get_frame_sp (struct frame_info *this_frame)
 enum unwind_stop_reason
 get_frame_unwind_stop_reason (struct frame_info *frame)
 {
-  /* If we haven't tried to unwind past this point yet, then assume
-     that unwinding would succeed.  */
-  if (frame->prev_p == 0)
-    return UNWIND_NO_REASON;
+  /* Fill-in STOP_REASON.  */
+  get_prev_frame_1 (frame);
+  gdb_assert (frame->prev_p);
 
-  /* Otherwise, we set a reason when we succeeded (or failed) to
-     unwind.  */
   return frame->stop_reason;
 }
 
diff --git a/gdb/target.c b/gdb/target.c
index ecffc9c..58388f3 100644
--- a/gdb/target.c
+++ b/gdb/target.c
@@ -4500,6 +4500,20 @@ target_call_history_range (ULONGEST begin, ULONGEST end, int flags)
   tcomplain ();
 }
 
+/* See target.h.  */
+
+const struct frame_unwind *
+target_get_unwinder (void)
+{
+  struct target_ops *t;
+
+  for (t = current_target.beneath; t != NULL; t = t->beneath)
+    if (t->to_get_unwinder != NULL)
+      return t->to_get_unwinder;
+
+  return NULL;
+}
+
 static int
 deprecated_debug_xfer_memory (CORE_ADDR memaddr, bfd_byte *myaddr, int len,
 			      int write, struct mem_attrib *attrib,
diff --git a/gdb/target.h b/gdb/target.h
index e890999..632bf1d 100644
--- a/gdb/target.h
+++ b/gdb/target.h
@@ -945,6 +945,10 @@ struct target_ops
        non-empty annex.  */
     int (*to_augmented_libraries_svr4_read) (void);
 
+    /* This unwinder is tried before any other arch unwinders.  Use NULL if it
+       is not used.  */
+    const struct frame_unwind *to_get_unwinder;
+
     int to_magic;
     /* Need sub-structure for target machine related rather than comm related?
      */
@@ -1826,6 +1830,9 @@ extern char *target_fileio_read_stralloc (const char *filename);
 
 extern int target_core_of_thread (ptid_t ptid);
 
+/* See to_get_unwinder in struct target_ops.  */
+extern const struct frame_unwind *target_get_unwinder (void);
+
 /* Verify that the memory in the [MEMADDR, MEMADDR+SIZE) range matches
    the contents of [DATA,DATA+SIZE).  Returns 1 if there's a match, 0
    if there's a mismatch, and -1 if an error is encountered while
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 23/24] record-btrace: add (reverse-)stepping support
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (15 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 18/24] record-btrace: extend unwinder Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:09   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 12/24] frame, backtrace: allow targets to supply a frame unwinder Markus Metzger
                   ` (7 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

There's an open question regarding frame unwinding.  When I start stepping,
the frame cache will still be based on normal unwinding, as will the frame
cached in the thread's stepping context.  This would prevent me from detecting
that I stepped into a subroutine.

To overcome that, I'm resetting the frame cache and setting the thread's
stepping cache based on the current frame - which is now computed using the
branch trace unwinder.  I had to split get_current_frame to avoid checks that
would otherwise prevent me from doing this.

I also need to call registers_changed when I return from to_wait.  Otherwise,
the PC is not updated and the current location is shown incorrectly.  I'm not
sure whether this is intended or whether I'm unintentionally working around
something here.

It looks like I don't need any special support for breakpoints.  Is there a
scenario where normal breakpoints won't work?

Non-stop mode is not working.  Do not allow record-btrace in non-stop mode.

Reviewed-by: Eli Zaretskii  <eliz@gnu.org>

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* btrace.h (btrace_thread_flag): New.
	(struct btrace_thread_info)<flags>: New.
	* frame.c (get_current_frame_nocheck): New.
	(get_current_frame): Call get_current_frame_nocheck.
	* frame.h (get_current_frame_nocheck): New.
	* record-btrace.c (record_btrace_resume_thread,
	record_btrace_find_thread_to_move, btrace_step_no_history,
	btrace_step_stopped, record_btrace_start_replaying,
	record_btrace_step_thread,
	record_btrace_find_resume_thread): New.
	(record_btrace_resume, record_btrace_wait): Extend.
	(record_btrace_can_execute_reverse): New.
	(record_btrace_open): Fail in non-stop mode.
	(record_btrace_set_replay): Split into this, ...
	(record_btrace_stop_replaying): ... this, ...
	(record_btrace_clear_histories): ... and this.
	(init_record_btrace_ops): Init to_can_execute_reverse.
	* NEWS: Announce it.

testsuite/
	* gdb.btrace/delta.exp: Check reverse stepi.
	* gdb.btrace/finish.exp: New.
	* gdb.btrace/next.exp: New.
	* gdb.btrace/nexti.exp: New.
	* gdb.btrace/record_goto.c: Add comments.
	* gdb.btrace/step.exp: New.
	* gdb.btrace/stepi.exp: New.
	* gdb.btrace/multi-thread-step.c: New.
	* gdb.btrace/multi-thread-step.exp: New.

doc/
	* gdb.texinfo: Document limited reverse/replay support
	for target record-btrace.


---
 gdb/NEWS                                       |    4 +
 gdb/btrace.h                                   |   22 ++
 gdb/doc/gdb.texinfo                            |    4 +-
 gdb/frame.c                                    |   38 ++-
 gdb/frame.h                                    |    4 +
 gdb/record-btrace.c                            |  366 ++++++++++++++++++++++--
 gdb/testsuite/gdb.btrace/delta.exp             |   13 +
 gdb/testsuite/gdb.btrace/finish.exp            |   70 +++++
 gdb/testsuite/gdb.btrace/multi-thread-step.c   |   53 ++++
 gdb/testsuite/gdb.btrace/multi-thread-step.exp |   84 ++++++
 gdb/testsuite/gdb.btrace/next.exp              |   89 ++++++
 gdb/testsuite/gdb.btrace/nexti.exp             |   89 ++++++
 gdb/testsuite/gdb.btrace/record_goto.c         |   36 ++--
 gdb/testsuite/gdb.btrace/step.exp              |  113 ++++++++
 gdb/testsuite/gdb.btrace/stepi.exp             |  114 ++++++++
 15 files changed, 1047 insertions(+), 52 deletions(-)
 create mode 100644 gdb/testsuite/gdb.btrace/finish.exp
 create mode 100644 gdb/testsuite/gdb.btrace/multi-thread-step.c
 create mode 100644 gdb/testsuite/gdb.btrace/multi-thread-step.exp
 create mode 100644 gdb/testsuite/gdb.btrace/next.exp
 create mode 100644 gdb/testsuite/gdb.btrace/nexti.exp
 create mode 100644 gdb/testsuite/gdb.btrace/step.exp
 create mode 100644 gdb/testsuite/gdb.btrace/stepi.exp

diff --git a/gdb/NEWS b/gdb/NEWS
index 433a968..b53033a 100644
--- a/gdb/NEWS
+++ b/gdb/NEWS
@@ -13,6 +13,10 @@ Nios II ELF 			nios2*-*-elf
 Nios II GNU/Linux		nios2*-*-linux
 Texas Instruments MSP430	msp430*-*-elf
 
+* The btrace record target supports limited replay and reverse
+  execution.  The target does not record data and therefore does not
+  allow reading memory or registers.
+
 * The btrace record target supports the 'record goto' command.
   For locations inside the execution trace, the back trace is computed
   based on the information stored in the execution trace.
diff --git a/gdb/btrace.h b/gdb/btrace.h
index 04466d3..22fabb5 100644
--- a/gdb/btrace.h
+++ b/gdb/btrace.h
@@ -149,6 +149,25 @@ struct btrace_call_history
   struct btrace_call_iterator end;
 };
 
+/* Branch trace thread flags.  */
+enum btrace_thread_flag
+  {
+    /* The thread is to be stepped forwards.  */
+    BTHR_STEP = (1 << 0),
+
+    /* The thread is to be stepped backwards.  */
+    BTHR_RSTEP = (1 << 1),
+
+    /* The thread is to be continued forwards.  */
+    BTHR_CONT = (1 << 2),
+
+    /* The thread is to be continued backwards.  */
+    BTHR_RCONT = (1 << 3),
+
+    /* The thread is to be moved.  */
+    BTHR_MOVE = (BTHR_STEP | BTHR_RSTEP | BTHR_CONT | BTHR_RCONT)
+  };
+
 /* Branch trace information per thread.
 
    This represents the branch trace configuration as well as the entry point
@@ -176,6 +195,9 @@ struct btrace_thread_info
      becomes zero.  */
   int level;
 
+  /* A bit-vector of btrace_thread_flag.  */
+  unsigned int flags;
+
   /* The instruction history iterator.  */
   struct btrace_insn_history *insn_history;
 
diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
index 2dc45bc..9ad5391 100644
--- a/gdb/doc/gdb.texinfo
+++ b/gdb/doc/gdb.texinfo
@@ -6192,8 +6192,8 @@ replay implementation.  This method allows replaying and reverse
 execution.
 
 @item btrace
-Hardware-supported instruction recording.  This method does not allow
-replaying and reverse execution.
+Hardware-supported instruction recording.  This method does not record
+data.  It allows limited replay and reverse execution.
 
 This recording method may not be available on all processors.
 @end table
diff --git a/gdb/frame.c b/gdb/frame.c
index 5c080eb..f2dbdb4 100644
--- a/gdb/frame.c
+++ b/gdb/frame.c
@@ -1367,6 +1367,29 @@ unwind_to_current_frame (struct ui_out *ui_out, void *args)
   return 0;
 }
 
+/* See frame.h.  */
+
+struct frame_info *get_current_frame_nocheck (void)
+{
+  if (current_frame == NULL)
+    {
+      struct frame_info *sentinel_frame =
+	create_sentinel_frame (current_program_space, get_current_regcache ());
+
+      if (catch_exceptions (current_uiout, unwind_to_current_frame,
+			    sentinel_frame, RETURN_MASK_ERROR) != 0)
+	{
+	  /* Oops! Fake a current frame?  Is this useful?  It has a PC
+             of zero, for instance.  */
+	  current_frame = sentinel_frame;
+	}
+    }
+
+  return current_frame;
+}
+
+/* See frame.h.  */
+
 struct frame_info *
 get_current_frame (void)
 {
@@ -1381,6 +1404,7 @@ get_current_frame (void)
     error (_("No stack."));
   if (!target_has_memory)
     error (_("No memory."));
+
   /* Traceframes are effectively a substitute for the live inferior.  */
   if (get_traceframe_number () < 0)
     {
@@ -1392,19 +1416,7 @@ get_current_frame (void)
 	error (_("Target is executing."));
     }
 
-  if (current_frame == NULL)
-    {
-      struct frame_info *sentinel_frame =
-	create_sentinel_frame (current_program_space, get_current_regcache ());
-      if (catch_exceptions (current_uiout, unwind_to_current_frame,
-			    sentinel_frame, RETURN_MASK_ERROR) != 0)
-	{
-	  /* Oops! Fake a current frame?  Is this useful?  It has a PC
-             of zero, for instance.  */
-	  current_frame = sentinel_frame;
-	}
-    }
-  return current_frame;
+  return get_current_frame_nocheck ();
 }
 
 /* The "selected" stack frame is used by default for local and arg
diff --git a/gdb/frame.h b/gdb/frame.h
index db4cc52..e3f004b 100644
--- a/gdb/frame.h
+++ b/gdb/frame.h
@@ -240,6 +240,10 @@ enum frame_type
    error.  */
 extern struct frame_info *get_current_frame (void);
 
+/* Similar to get_current_frame except that we omit all checks.  May
+   return NULL if unwinding fails.  */
+extern struct frame_info *get_current_frame_nocheck (void);
+
 /* Does the current target interface have enough state to be able to
    query the current inferior for frame info, and is the inferior in a
    state where that is possible?  */
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 14dbcd2..b45a5fb 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -151,6 +151,10 @@ record_btrace_open (char *args, int from_tty)
   if (!target_supports_btrace ())
     error (_("Target does not support branch tracing."));
 
+  if (non_stop)
+    error (_("Record btrace can't debug inferior in non-stop mode "
+	     "(non-stop)."));
+
   gdb_assert (record_btrace_thread_observer == NULL);
 
   disable_chain = make_cleanup (null_cleanup, NULL);
@@ -1183,14 +1187,107 @@ static const struct frame_unwind record_btrace_frame_unwind =
   record_btrace_frame_dealloc_cache
 };
 
+/* Indicate that TP should be resumed according to FLAG.  */
+
+static void
+record_btrace_resume_thread (struct thread_info *tp,
+			     enum btrace_thread_flag flag)
+{
+  struct btrace_thread_info *btinfo;
+
+  DEBUG ("resuming %d (%s): %u", tp->num, target_pid_to_str (tp->ptid), flag);
+
+  btinfo = &tp->btrace;
+
+  if ((btinfo->flags & BTHR_MOVE) != 0)
+    error (_("Thread already moving."));
+
+  /* Fetch the latest branch trace.  */
+  btrace_fetch (tp);
+
+  btinfo->flags |= flag;
+}
+
+/* Find the thread to resume given a PTID.  */
+
+static struct thread_info *
+record_btrace_find_resume_thread (ptid_t ptid)
+{
+  struct thread_info *tp;
+
+  /* When asked to resume everything, we pick the current thread.  */
+  if (ptid_equal (minus_one_ptid, ptid) || ptid_is_pid (ptid))
+    ptid = inferior_ptid;
+
+  return find_thread_ptid (ptid);
+}
+
+/* Start replaying a thread.  */
+
+static struct btrace_insn_iterator *
+record_btrace_start_replaying (struct btrace_thread_info *btinfo)
+{
+  struct btrace_insn_iterator *replay;
+  const struct btrace_insn *insn;
+  struct symtab_and_line sal;
+  struct frame_info *frame;
+
+  /* We can't start replaying without trace.  */
+  if (btinfo->begin == NULL)
+    return NULL;
+
+  /* We start replaying at the end of the branch trace.  This corresponds to the
+     current instruction.  */
+  replay = xzalloc (sizeof (*replay));
+  btrace_insn_end (replay, btinfo);
+
+  /* We're not replaying, yet.  */
+  gdb_assert (btinfo->replay == NULL);
+  btinfo->replay = replay;
+
+  /* Make sure we're not using any stale registers or frames.  */
+  registers_changed ();
+  reinit_frame_cache ();
+
+  /* We just started replaying.  The frame id cached for stepping is based
+     on unwinding, not on branch tracing.  Recompute it.  */
+  frame = get_current_frame_nocheck ();
+  insn = btrace_insn_get (replay);
+  sal = find_pc_line (insn->pc, 0);
+  set_step_info (frame, sal);
+
+  return replay;
+}
+
+/* Stop replaying a thread.  */
+
+static void
+record_btrace_stop_replaying (struct btrace_thread_info *btinfo)
+{
+  xfree (btinfo->replay);
+  btinfo->replay = NULL;
+}
+
 /* The to_resume method of target record-btrace.  */
 
 static void
 record_btrace_resume (struct target_ops *ops, ptid_t ptid, int step,
 		      enum gdb_signal signal)
 {
+  struct thread_info *tp, *other;
+  enum btrace_thread_flag flag;
+
+  DEBUG ("resume %s: %s", target_pid_to_str (ptid), step ? "step" : "cont");
+
+  tp = record_btrace_find_resume_thread (ptid);
+
+  /* Stop replaying other threads if the thread to resume is not replaying.  */
+  if (tp != NULL && !btrace_is_replaying (tp))
+    ALL_THREADS (other)
+      record_btrace_stop_replaying (&other->btrace);
+
   /* As long as we're not replaying, just forward the request.  */
-  if (!record_btrace_is_replaying ())
+  if (!record_btrace_is_replaying () && execution_direction != EXEC_REVERSE)
     {
       for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
 	if (ops->to_resume != NULL)
@@ -1199,7 +1296,211 @@ record_btrace_resume (struct target_ops *ops, ptid_t ptid, int step,
       error (_("Cannot find target for stepping."));
     }
 
-  error (_("You can't do this from here.  Do 'record goto end', first."));
+  /* We can't pass signals when replaying.  */
+  if (signal != GDB_SIGNAL_0)
+    error (_("You can't resume with signal from here."));
+
+  /* Compute the btrace thread flag for the requested move.  */
+  if (step == 0)
+    flag = execution_direction == EXEC_REVERSE ? BTHR_RCONT : BTHR_CONT;
+  else
+    flag = execution_direction == EXEC_REVERSE ? BTHR_RSTEP : BTHR_STEP;
+
+  /* Find the thread to move.  */
+  if (ptid_equal (minus_one_ptid, ptid) || ptid_is_pid (ptid))
+    {
+      ALL_THREADS (tp)
+	record_btrace_resume_thread (tp, flag);
+    }
+  else if (tp == NULL)
+    error (_("Cannot find thread to resume."));
+  else
+    record_btrace_resume_thread (tp, flag);
+
+  /* We just indicate the resume intent here.  The actual stepping happens in
+     record_btrace_wait below.  */
+}
+
+/* Find a thread to move.  */
+
+static struct thread_info *
+record_btrace_find_thread_to_move (ptid_t ptid)
+{
+  struct thread_info *tp;
+
+  /* First check the parameter thread.  */
+  tp = find_thread_ptid (ptid);
+  if (tp != NULL && (tp->btrace.flags & BTHR_MOVE) != 0)
+    return tp;
+
+  /* Next check the current thread. */
+  tp = find_thread_ptid (inferior_ptid);
+  if (tp != NULL && (tp->btrace.flags & BTHR_MOVE) != 0)
+    return tp;
+
+  /* Otherwise, find one other thread that has been resumed.  */
+  ALL_THREADS (tp)
+    if ((tp->btrace.flags & BTHR_MOVE) != 0)
+      return tp;
+
+  return NULL;
+}
+
+/* Return a target_waitstatus indicating that we ran out of history.  */
+
+static struct target_waitstatus
+btrace_step_no_history (void)
+{
+  struct target_waitstatus status;
+
+  status.kind = TARGET_WAITKIND_NO_HISTORY;
+
+  return status;
+}
+
+/* Return a target_waitstatus indicating that we stopped.  */
+
+static struct target_waitstatus
+btrace_step_stopped (void)
+{
+  struct target_waitstatus status;
+
+  status.kind = TARGET_WAITKIND_STOPPED;
+  status.value.sig = GDB_SIGNAL_TRAP;
+
+  return status;
+}
+
+/* Clear the record histories.  */
+
+static void
+record_btrace_clear_histories (struct btrace_thread_info *btinfo)
+{
+  xfree (btinfo->insn_history);
+  xfree (btinfo->call_history);
+
+  btinfo->insn_history = NULL;
+  btinfo->call_history = NULL;
+}
+
+/* Step a single thread.  */
+
+static struct target_waitstatus
+record_btrace_step_thread (struct thread_info *tp)
+{
+  struct btrace_insn_iterator *replay, end;
+  struct btrace_thread_info *btinfo;
+  struct address_space *aspace;
+  unsigned int steps, flag;
+
+  btinfo = &tp->btrace;
+  replay = btinfo->replay;
+
+  flag = btinfo->flags & BTHR_MOVE;
+  btinfo->flags &= ~BTHR_MOVE;
+
+  DEBUG ("stepping %d (%s): %u", tp->num, target_pid_to_str (tp->ptid), flag);
+
+  switch (flag)
+    {
+    default:
+      internal_error (__FILE__, __LINE__, _("invalid stepping type."));
+
+    case BTHR_STEP:
+      /* We're done if we're not replaying.  */
+      if (replay == NULL)
+	return btrace_step_no_history ();
+
+      /* We are always able to step at least once.  */
+      steps = btrace_insn_next (replay, 1);
+      gdb_assert (steps == 1);
+
+      /* Determine the end of the instruction trace.  */
+      btrace_insn_end (&end, btinfo);
+
+      /* We stop replaying if we reached the end of the trace.  */
+      if (btrace_insn_cmp (replay, &end) == 0)
+	record_btrace_stop_replaying (btinfo);
+
+      return btrace_step_stopped ();
+
+    case BTHR_RSTEP:
+      /* Start replaying if we're not already doing so.  */
+      if (replay == NULL)
+	replay = record_btrace_start_replaying (btinfo);
+
+      /* If we can't step any further, we reached the end of the history.  */
+      steps = btrace_insn_prev (replay, 1);
+      if (steps == 0)
+	return btrace_step_no_history ();
+
+      return btrace_step_stopped ();
+
+    case BTHR_CONT:
+      /* We're done if we're not replaying.  */
+      if (replay == NULL)
+	return btrace_step_no_history ();
+
+      /* I'd much rather go from TP to its inferior, but how?  */
+      aspace = current_inferior ()->aspace;
+
+      /* Determine the end of the instruction trace.  */
+      btrace_insn_end (&end, btinfo);
+
+      for (;;)
+	{
+	  const struct btrace_insn *insn;
+
+	  /* We are always able to step at least once.  */
+	  steps = btrace_insn_next (replay, 1);
+	  gdb_assert (steps == 1);
+
+	  /* We stop replaying if we reached the end of the trace.  */
+	  if (btrace_insn_cmp (replay, &end) == 0)
+	    {
+	      record_btrace_stop_replaying (btinfo);
+	      return btrace_step_no_history ();
+	    }
+
+	  insn = btrace_insn_get (replay);
+	  gdb_assert (insn);
+
+	  DEBUG ("stepping %d (%s) ... %s", tp->num,
+		 target_pid_to_str (tp->ptid),
+		 core_addr_to_string_nz (insn->pc));
+
+	  if (breakpoint_here_p (aspace, insn->pc))
+	    return btrace_step_stopped ();
+	}
+
+    case BTHR_RCONT:
+      /* Start replaying if we're not already doing so.  */
+      if (replay == NULL)
+	replay = record_btrace_start_replaying (btinfo);
+
+      /* I'd much rather go from TP to its inferior, but how?  */
+      aspace = current_inferior ()->aspace;
+
+      for (;;)
+	{
+	  const struct btrace_insn *insn;
+
+	  /* If we can't step any further, we're done.  */
+	  steps = btrace_insn_prev (replay, 1);
+	  if (steps == 0)
+	    return btrace_step_no_history ();
+
+	  insn = btrace_insn_get (replay);
+	  gdb_assert (insn);
+
+	  DEBUG ("stepping %d (%s): reverse~ ... %s", tp->num,
+		 target_pid_to_str (tp->ptid),
+		 core_addr_to_string_nz (insn->pc));
+
+	  if (breakpoint_here_p (aspace, insn->pc))
+	    return btrace_step_stopped ();
+	}
+    }
 }
 
 /* The to_wait method of target record-btrace.  */
@@ -1208,8 +1509,12 @@ static ptid_t
 record_btrace_wait (struct target_ops *ops, ptid_t ptid,
 		    struct target_waitstatus *status, int options)
 {
+  struct thread_info *tp, *other;
+
+  DEBUG ("wait %s (0x%x)", target_pid_to_str (ptid), options);
+
   /* As long as we're not replaying, just forward the request.  */
-  if (!record_btrace_is_replaying ())
+  if (!record_btrace_is_replaying () && execution_direction != EXEC_REVERSE)
     {
       for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
 	if (ops->to_wait != NULL)
@@ -1218,7 +1523,40 @@ record_btrace_wait (struct target_ops *ops, ptid_t ptid,
       error (_("Cannot find target for stepping."));
     }
 
-  error (_("You can't do this from here.  Do 'record goto end', first."));
+  /* Let's find a thread to move.  */
+  tp = record_btrace_find_thread_to_move (ptid);
+  if (tp == NULL)
+    {
+      DEBUG ("wait %s: no thread", target_pid_to_str (ptid));
+
+      status->kind = TARGET_WAITKIND_IGNORE;
+      return minus_one_ptid;
+    }
+
+  /* We only move a single thread.  We're not able to correlate threads.  */
+  *status = record_btrace_step_thread (tp);
+
+  /* Stop all other threads. */
+  if (!non_stop)
+    ALL_THREADS (other)
+      other->btrace.flags &= ~BTHR_MOVE;
+
+  /* Start record histories anew from the current position.  */
+  record_btrace_clear_histories (&tp->btrace);
+
+  /* GDB seems to need this.  Without it, a stale PC seems to be used,
+     resulting in the current location being displayed incorrectly.  */
+  registers_changed ();
+
+  return tp->ptid;
+}
+
+/* The to_can_execute_reverse method of target record-btrace.  */
+
+static int
+record_btrace_can_execute_reverse (void)
+{
+  return 1;
 }
 
 /* The to_find_new_threads method of target record-btrace.  */
@@ -1246,30 +1584,19 @@ record_btrace_set_replay (struct btrace_thread_info *btinfo,
 			  const struct btrace_insn_iterator *it)
 {
   if (it == NULL || it->function == NULL)
-    {
-      if (btinfo->replay == NULL)
-	return;
-
-      xfree (btinfo->replay);
-      btinfo->replay = NULL;
-    }
+    record_btrace_stop_replaying (btinfo);
   else
     {
       if (btinfo->replay == NULL)
-	btinfo->replay = xzalloc (sizeof (*btinfo->replay));
+	record_btrace_start_replaying (btinfo);
       else if (btrace_insn_cmp (btinfo->replay, it) == 0)
 	return;
 
       *btinfo->replay = *it;
     }
 
-  /* Clear the function call and instruction histories so we start anew
-     from the new replay position.  */
-  xfree (btinfo->insn_history);
-  xfree (btinfo->call_history);
-
-  btinfo->insn_history = NULL;
-  btinfo->call_history = NULL;
+  /* Start anew from the new replay position.  */
+  record_btrace_clear_histories (btinfo);
 
   registers_changed ();
   reinit_frame_cache ();
@@ -1365,6 +1692,7 @@ init_record_btrace_ops (void)
   ops->to_goto_record_begin = record_btrace_goto_begin;
   ops->to_goto_record_end = record_btrace_goto_end;
   ops->to_goto_record = record_btrace_goto;
+  ops->to_can_execute_reverse = record_btrace_can_execute_reverse;
   ops->to_stratum = record_stratum;
   ops->to_magic = OPS_MAGIC;
 }
diff --git a/gdb/testsuite/gdb.btrace/delta.exp b/gdb/testsuite/gdb.btrace/delta.exp
index 9ee2629..49d151e 100644
--- a/gdb/testsuite/gdb.btrace/delta.exp
+++ b/gdb/testsuite/gdb.btrace/delta.exp
@@ -61,3 +61,16 @@ gdb_test "record instruction-history /f 1" "
 1\t   0x\[0-9a-f\]+ <\\+\[0-9\]+>:\tmov *\\\$0x0,%eax\r" "delta, 4.2"
 gdb_test "record function-call-history /c 1" "
 1\tmain\r" "delta, 4.3"
+
+# check that we can reverse-stepi that instruction
+gdb_test "reverse-stepi"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 1 instructions in 1 functions for .*\r
+Replay in progress\.  At instruction 1\." "delta, 5.1"
+
+# and back
+gdb_test "stepi"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 1 instructions in 1 functions for .*" "delta, 5.2"
diff --git a/gdb/testsuite/gdb.btrace/finish.exp b/gdb/testsuite/gdb.btrace/finish.exp
new file mode 100644
index 0000000..87ebfe1
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/finish.exp
@@ -0,0 +1,70 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile x86-record_goto.S
+if [prepare_for_testing finish.exp $testfile $srcfile] {
+    return -1
+}
+
+if ![runto_main] {
+    return -1
+}
+
+# trace the call to the test function
+gdb_test_no_output "record btrace"
+gdb_test "next"
+
+# let's go somewhere where we can finish
+gdb_test "record goto 32" ".*fun1\.1.*" "finish, 1.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 32\." "finish, 1.2"
+
+# let's finish into fun2
+gdb_test "finish" ".*fun2\.3.*" "finish, 2.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 35\." "finish, 2.2"
+
+# now let's reverse-finish into fun3
+gdb_test "reverse-finish" ".*fun3\.3.*" "finish, 3.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 27\." "finish, 3.2"
+
+# finish again - into fun4
+gdb_test "finish" ".*fun4\.5.*" "finish, 4.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 39\." "finish, 4.2"
+
+# and reverse-finish again - into main
+gdb_test "reverse-finish" ".*main\.2.*" "finish, 5.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 1\." "finish, 5.2"
diff --git a/gdb/testsuite/gdb.btrace/multi-thread-step.c b/gdb/testsuite/gdb.btrace/multi-thread-step.c
new file mode 100644
index 0000000..487565b
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/multi-thread-step.c
@@ -0,0 +1,53 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2013 Free Software Foundation, Inc.
+
+   Contributed by Intel Corp. <markus.t.metzger@intel.com>
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <pthread.h>
+
+static pthread_barrier_t barrier;
+static int global;
+
+static void *
+test (void *arg)
+{
+  pthread_barrier_wait (&barrier);
+
+  global = 42; /* bp.1 */
+
+  pthread_barrier_wait (&barrier);
+
+  global = 42; /* bp.2 */
+
+  return arg;
+}
+
+int
+main (void)
+{
+  pthread_t th;
+
+  pthread_barrier_init (&barrier, NULL, 2);
+  pthread_create (&th, NULL, test, NULL);
+
+  test (NULL);
+
+  pthread_join (th, NULL);
+  pthread_barrier_destroy (&barrier);
+
+  return 0; /* bp.3 */
+}
diff --git a/gdb/testsuite/gdb.btrace/multi-thread-step.exp b/gdb/testsuite/gdb.btrace/multi-thread-step.exp
new file mode 100644
index 0000000..bb88e13
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/multi-thread-step.exp
@@ -0,0 +1,84 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile
+if {[gdb_compile_pthreads "$srcdir/$subdir/$srcfile" "$binfile" executable {debug}] != "" } {
+    return -1
+}
+clean_restart $testfile
+
+if ![runto_main] {
+    return -1
+}
+
+# set up breakpoints
+set bp_1 [gdb_get_line_number "bp.1" $srcfile]
+set bp_2 [gdb_get_line_number "bp.2" $srcfile]
+set bp_3 [gdb_get_line_number "bp.3" $srcfile]
+
+proc gdb_cont_to_line { line test } {
+	gdb_breakpoint $line
+	gdb_continue_to_breakpoint "$test - $line" ".*$line.*"
+	delete_breakpoints
+}
+
+# trace the code between the two breakpoints
+delete_breakpoints
+gdb_cont_to_line $srcfile:$bp_1 "mts, 0.1"
+# make sure GDB knows about the new thread
+gdb_test "info threads" ".*" "mts, 0.2"
+gdb_test_no_output "record btrace" "mts, 0.3"
+gdb_cont_to_line $srcfile:$bp_2 "mts, 0.4"
+
+# navigate in the trace history for both threads
+gdb_test "thread 1" ".*" "mts, 1.1"
+gdb_test "record goto begin" ".*" "mts, 1.2"
+gdb_test "info record" ".*Replay in progress\.  At instruction 1\." "mts, 1.3"
+gdb_test "thread 2" ".*" "mts, 1.4"
+gdb_test "record goto begin" ".*" "mts, 1.5"
+gdb_test "info record" ".*Replay in progress\.  At instruction 1\." "mts, 1.6"
+
+# step both threads
+gdb_test "thread 1" ".*" "mts, 2.1"
+gdb_test "info record" ".*Replay in progress\.  At instruction 1\." "mts, 2.2"
+gdb_test "stepi" ".*" "mts, 2.3"
+gdb_test "info record" ".*Replay in progress\.  At instruction 2\." "mts, 2.4"
+gdb_test "thread 2" ".*" "mts, 2.5"
+gdb_test "info record" ".*Replay in progress\.  At instruction 1\." "mts, 2.6"
+gdb_test "stepi" ".*" "mts, 2.7"
+gdb_test "info record" ".*Replay in progress\.  At instruction 2\." "mts, 2.8"
+
+# run to the end of the history for both threads
+gdb_test "thread 1" ".*" "mts, 3.1"
+gdb_test "info record" ".*Replay in progress\.  At instruction 2\." "mts, 3.2"
+gdb_test "continue" "No more reverse-execution history.*" "mts, 3.3"
+gdb_test "thread 2" ".*" "mts, 3.4"
+gdb_test "info record" ".*Replay in progress\.  At instruction 2\." "mts, 3.5"
+gdb_test "continue" "No more reverse-execution history.*" "mts, 3.6"
+
+# navigate back into the history for thread 1 and continue thread 2
+gdb_test "thread 1" ".*" "mts, 4.1"
+gdb_test "record goto begin" ".*" "mts, 4.2"
+gdb_test "info record" ".*Replay in progress\.  At instruction 1\." "mts, 4.3"
+gdb_test "thread 2" ".*" "mts, 4.4"
+gdb_cont_to_line $srcfile:$bp_3 "mts, 4.5"
diff --git a/gdb/testsuite/gdb.btrace/next.exp b/gdb/testsuite/gdb.btrace/next.exp
new file mode 100644
index 0000000..12a5e8e
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/next.exp
@@ -0,0 +1,89 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile x86-record_goto.S
+if [prepare_for_testing next.exp $testfile $srcfile] {
+    return -1
+}
+
+if ![runto_main] {
+    return -1
+}
+
+# trace the call to the test function
+gdb_test_no_output "record btrace"
+gdb_test "next"
+
+# we start with stepping to make sure that the trace is fetched automatically
+# the call is outside of our trace
+gdb_test "reverse-next" ".*main\.2.*" "next, 1.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 1\." "next, 1.2"
+
+# we can't reverse-step any further
+gdb_test "reverse-next" "No more reverse-execution history\.\r
+.*main\.2.*" "next, 1.3"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 1\." "next, 1.4"
+
+# but we can step back again
+gdb_test "next" ".*main\.3.*" "next, 1.5"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r" "next, 1.6"
+
+# let's go somewhere where we can step some more
+gdb_test "record goto 22" ".*fun3\.2.*" "next, 2.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 22\." "next, 2.2"
+
+gdb_test "next" ".*fun3\.3.*" "next, 2.3"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 27\." "next, 2.4"
+
+gdb_test "next" ".*fun3\.4.*" "next, 2.5"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 37\." "next, 2.6"
+
+# and back again
+gdb_test "reverse-next" ".*fun3\.3.*" "next, 3.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 27\." "next, 3.2"
+
+gdb_test "reverse-next" ".*fun3\.2.*" "next, 3.3"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 22\." "next, 3.4"
diff --git a/gdb/testsuite/gdb.btrace/nexti.exp b/gdb/testsuite/gdb.btrace/nexti.exp
new file mode 100644
index 0000000..559a9b7
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/nexti.exp
@@ -0,0 +1,89 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile x86-record_goto.S
+if [prepare_for_testing nexti.exp $testfile $srcfile] {
+    return -1
+}
+
+if ![runto_main] {
+    return -1
+}
+
+# trace the call to the test function
+gdb_test_no_output "record btrace"
+gdb_test "next"
+
+# we start with stepping to make sure that the trace is fetched automatically
+# the call is outside of our trace
+gdb_test "reverse-nexti" ".*main\.2.*" "nexti, 1.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 1\." "nexti, 1.2"
+
+# we can't reverse-step any further
+gdb_test "reverse-nexti" "No more reverse-execution history\.\r
+.*main\.2.*" "nexti, 1.3"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 1\." "nexti, 1.4"
+
+# but we can step back again
+gdb_test "nexti" ".*main\.3.*" "next, 1.5"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r" "nexti, 1.6"
+
+# let's go somewhere where we can step some more
+gdb_test "record goto 22" ".*fun3\.2.*" "nexti, 2.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 22\." "nexti, 2.2"
+
+gdb_test "nexti" ".*fun3\.3.*" "nexti, 2.3"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 27\." "nexti, 2.4"
+
+gdb_test "nexti" ".*fun3\.4.*" "nexti, 2.5"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 37\." "nexti, 2.6"
+
+# and back again
+gdb_test "reverse-nexti" ".*fun3\.3.*" "nexti, 3.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 27\." "nexti, 3.2"
+
+gdb_test "reverse-nexti" ".*fun3\.2.*" "nexti, 3.3"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 22\." "nexti, 3.4"
diff --git a/gdb/testsuite/gdb.btrace/record_goto.c b/gdb/testsuite/gdb.btrace/record_goto.c
index 1250708..90537f9 100644
--- a/gdb/testsuite/gdb.btrace/record_goto.c
+++ b/gdb/testsuite/gdb.btrace/record_goto.c
@@ -19,33 +19,33 @@
 
 void
 fun1 (void)
-{
-}
+{		/* fun1.1 */
+}		/* fun1.2 */
 
 void
 fun2 (void)
-{
-  fun1 ();
-}
+{		/* fun2.1 */
+  fun1 ();	/* fun2.2 */
+}		/* fun2.3 */
 
 void
 fun3 (void)
-{
-  fun1 ();
-  fun2 ();
-}
+{		/* fun3.1 */
+  fun1 ();	/* fun3.2 */
+  fun2 ();	/* fun3.3 */
+}		/* fun3.4 */
 
 void
 fun4 (void)
-{
-  fun1 ();
-  fun2 ();
-  fun3 ();
-}
+{		/* fun4.1 */
+  fun1 ();	/* fun4.2 */
+  fun2 ();	/* fun4.3 */
+  fun3 ();	/* fun4.4 */
+}		/* fun4.5 */
 
 int
 main (void)
-{
-  fun4 ();
-  return 0;
-}
+{		/* main.1 */
+  fun4 ();	/* main.2 */
+  return 0;	/* main.3 */
+}		/* main.4 */
diff --git a/gdb/testsuite/gdb.btrace/step.exp b/gdb/testsuite/gdb.btrace/step.exp
new file mode 100644
index 0000000..bb8942e
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/step.exp
@@ -0,0 +1,113 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile x86-record_goto.S
+if [prepare_for_testing step.exp $testfile $srcfile] {
+    return -1
+}
+
+if ![runto_main] {
+    return -1
+}
+
+# trace the call to the test function
+gdb_test_no_output "record btrace"
+gdb_test "next"
+
+# let's start by stepping back into the function we just returned from
+gdb_test "reverse-step" ".*fun4\.5.*" "step, 1.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 39\." "step, 1.2"
+
+# again
+gdb_test "reverse-step" ".*fun3\.4.*" "step, 2.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 37\." "step, 2.2"
+
+# and again
+gdb_test "reverse-step" ".*fun2\.3.*" "step, 3.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 35\." "step, 3.2"
+
+# once more
+gdb_test "reverse-step" ".*fun1\.2.*" "step, 4.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 33\." "step, 4.2"
+
+# and out again the other side
+gdb_test "reverse-step" ".*fun2\.2.*" "step, 5.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 30\." "step, 5.2"
+
+# once again
+gdb_test "reverse-step" ".*fun3\.3.*" "step, 6.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 27\." "step, 6.2"
+
+# and back the way we came
+gdb_test "step" ".*fun2\.2.*" "step, 7.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 30\." "step, 7.2"
+
+gdb_test "step" ".*fun1\.2.*" "step, 8.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 33\." "step, 8.2"
+
+gdb_test "step" ".*fun2\.3.*" "step, 9.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 35\." "step, 9.2"
+
+gdb_test "step" ".*fun3\.4.*" "step, 10.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 37\." "step, 10.2"
+
+gdb_test "step" ".*fun4\.5.*" "step, 11.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 39\." "step, 11.2"
+
+gdb_test "step" ".*main\.3.*" "step, 12.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r" "step, 12.2"
diff --git a/gdb/testsuite/gdb.btrace/stepi.exp b/gdb/testsuite/gdb.btrace/stepi.exp
new file mode 100644
index 0000000..22f1574
--- /dev/null
+++ b/gdb/testsuite/gdb.btrace/stepi.exp
@@ -0,0 +1,114 @@
+# This testcase is part of GDB, the GNU debugger.
+#
+# Copyright 2013 Free Software Foundation, Inc.
+#
+# Contributed by Intel Corp. <markus.t.metzger@intel.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# check for btrace support
+if { [skip_btrace_tests] } { return -1 }
+
+# start inferior
+standard_testfile x86-record_goto.S
+if [prepare_for_testing stepi.exp $testfile $srcfile] {
+    return -1
+}
+
+global gdb_prompt
+
+if ![runto_main] {
+    return -1
+}
+
+# trace the call to the test function
+gdb_test_no_output "record btrace"
+gdb_test "next"
+
+# we start with stepping to make sure that the trace is fetched automatically
+gdb_test "reverse-stepi" ".*fun4\.5.*" "stepi, 1.1"
+gdb_test "reverse-stepi" ".*fun4\.5.*" "stepi, 1.2"
+
+# let's check where we are in the trace
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 39\." "stepi, 1.3"
+
+# let's step forward and check again
+gdb_test "stepi" ".*fun4\.5.*" "stepi, 2.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 40\." "stepi, 2.2"
+
+# with the next step, we stop replaying
+gdb_test "stepi" ".*main\.3.*" "stepi, 2.3"
+gdb_test_multiple "info record" "stepi, 2.4" {
+	-re "Replay in progress.*$gdb_prompt $" { fail "stepi, 2.4" }
+	-re ".*$gdb_prompt $" { pass "stepi, 2.4" }
+}
+
+# let's step from a goto position somewhere in the middle
+gdb_test "record goto 22" ".*fun3\.2.*" "stepi, 3.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 22\." "stepi, 3.2"
+gdb_test "stepi" ".*fun1\.1.*" "stepi, 3.3"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 23\." "stepi, 3.4"
+
+# and back again
+gdb_test "reverse-stepi" ".*fun3\.2.*" "stepi, 4.1"
+gdb_test "reverse-stepi" ".*fun3\.1.*" "stepi, 4.2"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 21\." "stepi, 4.3"
+
+# let's try to step off the left end
+gdb_test "record goto begin" ".*main\.2.*" "stepi, 5.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 1\." "stepi, 5.2"
+gdb_test "reverse-stepi" "No more reverse-execution history\.\r
+.*main\.2.*" "stepi, 5.3"
+gdb_test "reverse-stepi" "No more reverse-execution history\.\r
+.*main\.2.*" "stepi, 5.4"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 1\." "stepi, 5.5"
+
+# we can step forward, though
+gdb_test "stepi" ".*fun4\.1.*" "stepi, 6.1"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 2\." "stepi, 6.2"
+
+# let's try to step off the left end again
+gdb_test "reverse-stepi" ".*main\.2.*" "stepi, 7.1"
+gdb_test "reverse-stepi" "No more reverse-execution history\.\r
+.*main\.2.*" "stepi, 7.2"
+gdb_test "reverse-stepi" "No more reverse-execution history\.\r
+.*main\.2.*" "stepi, 7.3"
+gdb_test "info record" "
+Active record target: record-btrace\r
+Recorded 40 instructions in 16 functions for .*\r
+Replay in progress\.  At instruction 1\." "stepi, 7.4"
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 06/24] btrace: increase buffer size
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (19 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 17/24] record-btrace: add record goto target methods Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:06   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 15/24] record-btrace: add to_wait and to_resume target methods Markus Metzger
                   ` (3 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Try to allocate as large a buffer as we can for each thread, up to a maximum
of 4MB.
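
For reference, with 4KB pages the 4MB maximum corresponds to 2^10 data pages;
as I understand the perf_event interface, the mapping needs a power-of-two
number of data pages plus one metadata page.  A condensed sketch of the
fallback strategy (illustration only; fd stands for the perf_event file
descriptor, the real code uses perf_event_mmap_size and the fields of
struct btrace_target_info):

  /* Request 2^pg data pages plus the metadata page and halve the
     request until mmap succeeds.  */
  int pg;
  void *buffer = MAP_FAILED;
  size_t page_size = sysconf (_SC_PAGESIZE);

  for (pg = 10; pg >= 0 && buffer == MAP_FAILED; --pg)
    buffer = mmap (NULL, (((size_t) 1 << pg) + 1) * page_size,
                   PROT_READ, MAP_SHARED, fd, 0);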

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* common/linux-btrace.c (linux_enable_btrace): Increase buffer.


---
 gdb/common/linux-btrace.c |   25 +++++++++++++++----------
 1 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/gdb/common/linux-btrace.c b/gdb/common/linux-btrace.c
index b874c84..4880f41 100644
--- a/gdb/common/linux-btrace.c
+++ b/gdb/common/linux-btrace.c
@@ -420,7 +420,7 @@ struct btrace_target_info *
 linux_enable_btrace (ptid_t ptid)
 {
   struct btrace_target_info *tinfo;
-  int pid;
+  int pid, pg;
 
   tinfo = xzalloc (sizeof (*tinfo));
   tinfo->ptid = ptid;
@@ -448,17 +448,22 @@ linux_enable_btrace (ptid_t ptid)
   if (tinfo->file < 0)
     goto err;
 
-  /* We hard-code the trace buffer size.
-     At some later time, we should make this configurable.  */
-  tinfo->size = 1;
-  tinfo->buffer = mmap (NULL, perf_event_mmap_size (tinfo),
-			PROT_READ, MAP_SHARED, tinfo->file, 0);
-  if (tinfo->buffer == MAP_FAILED)
-    goto err_file;
+  /* We try to allocate as large a buffer as we can get.
+     We could allow the user to specify the size of the buffer, but then
+     we'd leave this search for the maximum buffer size to them.  */
+  for (pg = 10; pg >= 0; --pg)
+    {
+      /* The number of pages we request needs to be a power of two.  */
+      tinfo->size = 1 << pg;
+      tinfo->buffer = mmap (NULL, perf_event_mmap_size (tinfo),
+			    PROT_READ, MAP_SHARED, tinfo->file, 0);
+      if (tinfo->buffer == MAP_FAILED)
+	continue;
 
-  return tinfo;
+      return tinfo;
+    }
 
- err_file:
+  /* We were not able to allocate any buffer.  */
   close (tinfo->file);
 
  err:
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 04/24] record-btrace: fix insn range in function call history
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (21 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 15/24] record-btrace: add to_wait and to_resume target methods Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:06   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 21/24] record-btrace: show trace from enable location Markus Metzger
  2013-08-18 19:04 ` [patch v4 00/24] record-btrace: reverse Jan Kratochvil
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

With the "/i" modifier, we print the instruction number range in the
"record function-call-history" command as [begin, end).

It would be more intuitive if we printed the range as [begin, end].
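
As a made-up example: a function segment that starts at instruction 19 and
contains three instructions used to get 22 as its printed upper bound; with
this patch the bound becomes inclusive:

  /* Hypothetical segment: insn_offset == 19, three instructions.  */
  unsigned int begin = 19;
  unsigned int size = 3;
  unsigned int end = begin + size - 1;  /* 21: instructions 19..21, not 19..22.  */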

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* record-btrace.c (btrace_call_history_insn_range): Print
	insn range as [begin, end].


---
 gdb/record-btrace.c |    8 ++++++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 2e7c639..d9a2ba7 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -425,10 +425,14 @@ static void
 btrace_call_history_insn_range (struct ui_out *uiout,
 				const struct btrace_function *bfun)
 {
-  unsigned int begin, end;
+  unsigned int begin, end, size;
+
+  size = VEC_length (btrace_insn_s, bfun->insn);
+  if (size == 0)
+    return;
 
   begin = bfun->insn_offset;
-  end = begin + VEC_length (btrace_insn_s, bfun->insn);
+  end = begin + size - 1;
 
   ui_out_field_uint (uiout, "insn begin", begin);
   ui_out_text (uiout, "-");
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 18/24] record-btrace: extend unwinder
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (14 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 13/24] record-btrace, frame: supply target-specific unwinder Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:08   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 23/24] record-btrace: add (reverse-)stepping support Markus Metzger
                   ` (8 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Extend the always-failing unwinder to provide the PC based on the call structure
detected in the branch trace.

There are several open points:

An assertion in get_frame_id at frame.c:340 requires that a frame provides a
stack address.  The record-btrace unwinder can't provide this since the branch
trace does not contain any stack data.  I incorrectly set stack_addr_p to 1 to
avoid the assertion.

When evaluating arguments for printing the stack back trace, there's an ugly
error displayed: "error reading variable: can't compute CFA for this frame".
The error is correct: we can't compute the CFA since we don't have the stack
contents at that time.  But it is rather annoying in this place and makes the
back trace difficult to read.

Now that we set the PC to a different value and provide a fake unwinder, we have
the potential to affect almost every other command.  How can this be tested
sufficiently?  I added a few tests for the intended functionality, but nothing
so far to ensure that it does not break some other command when used in this
context.
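
For reference, a condensed sketch of how the sniffer below derives the outer
frame's PC from the branch trace (error handling, the frame cache, and debug
output omitted; the comments reflect my reading of the up-link flags):

  caller = bfun->up;
  if ((bfun->flags & BFUN_UP_LINKS_TO_RET) != 0)
    /* The up link was created by a return; the caller segment starts at
       the return address.  */
    pc = VEC_index (btrace_insn_s, caller->insn, 0)->pc;
  else
    {
      /* The up link was created by a call; the caller segment ends with
         the call instruction, so resume right after it.  For tail calls
         we link directly to the jump instruction.  */
      pc = VEC_last (btrace_insn_s, caller->insn)->pc;
      if ((bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) == 0)
        pc += gdb_insn_length (get_frame_arch (this_frame), pc);
    }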

Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
2013-04-24  Markus Metzger  <markus.t.metzger@intel.com>

	* frame.h (enum frame_type) <BTRACE_FRAME>: New.
	* record-btrace.c: Include hashtab.h.
	(btrace_get_bfun_name): New.
	(btrace_call_history): Call btrace_get_bfun_name.
	(struct btrace_frame_cache): New.
	(bfcache): New.
	(bfcache_hash, bfcache_eq, bfcache_new): New.
	(btrace_get_frame_function): New.
	(record_btrace_frame_unwind_stop_reason): Allow unwinding.
	(record_btrace_frame_this_id): Compute own id.
	(record_btrace_frame_prev_register): Provide PC, throw_error
	for all other registers.
	(record_btrace_frame_sniffer): Detect btrace frames.
	(record_btrace_frame_dealloc_cache): New.
	(record_btrace_frame_unwind): Add new functions.
	(_initialize_record_btrace): Allocate cache.
	* btrace.c (btrace_clear): Call reinit_frame_cache.
	* NEWS: Announce it.

testsuite/
	* gdb.btrace/record_goto.exp: Add backtrace test.
	* gdb.btrace/tailcall.exp: Add backtrace test.


---
 gdb/NEWS                                 |    2 +
 gdb/btrace.c                             |    4 +
 gdb/frame.h                              |    4 +-
 gdb/record-btrace.c                      |  259 +++++++++++++++++++++++++++---
 gdb/testsuite/gdb.btrace/record_goto.exp |   13 ++
 gdb/testsuite/gdb.btrace/tailcall.exp    |   17 ++
 6 files changed, 279 insertions(+), 20 deletions(-)

diff --git a/gdb/NEWS b/gdb/NEWS
index bfe4dd4..9b9de71 100644
--- a/gdb/NEWS
+++ b/gdb/NEWS
@@ -14,6 +14,8 @@ Nios II GNU/Linux		nios2*-*-linux
 Texas Instruments MSP430	msp430*-*-elf
 
 * The btrace record target supports the 'record goto' command.
+  For locations inside the execution trace, the back trace is computed
+  based on the information stored in the execution trace.
 
 * The command 'record function-call-history' supports a new modifier '/c' to
   indent the function names based on their call stack depth.
diff --git a/gdb/btrace.c b/gdb/btrace.c
index 0bec2cf..822926c 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -755,6 +755,10 @@ btrace_clear (struct thread_info *tp)
 
   DEBUG ("clear thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
 
+  /* Make sure btrace frames that may hold a pointer into the branch
+     trace data are destroyed.  */
+  reinit_frame_cache ();
+
   btinfo = &tp->btrace;
 
   it = btinfo->begin;
diff --git a/gdb/frame.h b/gdb/frame.h
index 31b9cb7..db4cc52 100644
--- a/gdb/frame.h
+++ b/gdb/frame.h
@@ -216,7 +216,9 @@ enum frame_type
   ARCH_FRAME,
   /* Sentinel or registers frame.  This frame obtains register values
      direct from the inferior's registers.  */
-  SENTINEL_FRAME
+  SENTINEL_FRAME,
+  /* A branch tracing frame.  */
+  BTRACE_FRAME
 };
 
 /* For every stopped thread, GDB tracks two frames: current and
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index d6508bd..a528f8b 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -34,6 +34,7 @@
 #include "filenames.h"
 #include "regcache.h"
 #include "frame-unwind.h"
+#include "hashtab.h"
 
 /* The target_ops of record-btrace.  */
 static struct target_ops record_btrace_ops;
@@ -507,6 +508,28 @@ btrace_call_history_src_line (struct ui_out *uiout,
   ui_out_field_int (uiout, "max line", end);
 }
 
+/* Get the name of a branch trace function.  */
+
+static const char *
+btrace_get_bfun_name (const struct btrace_function *bfun)
+{
+  struct minimal_symbol *msym;
+  struct symbol *sym;
+
+  if (bfun == NULL)
+    return "<none>";
+
+  msym = bfun->msym;
+  sym = bfun->sym;
+
+  if (sym != NULL)
+    return SYMBOL_PRINT_NAME (sym);
+  else if (msym != NULL)
+    return SYMBOL_PRINT_NAME (msym);
+  else
+    return "<unknown>";
+}
+
 /* Disassemble a section of the recorded function trace.  */
 
 static void
@@ -524,12 +547,8 @@ btrace_call_history (struct ui_out *uiout,
   for (it = *begin; btrace_call_cmp (&it, end) != 0; btrace_call_next (&it, 1))
     {
       const struct btrace_function *bfun;
-      struct minimal_symbol *msym;
-      struct symbol *sym;
 
       bfun = btrace_call_get (&it);
-      msym = bfun->msym;
-      sym = bfun->sym;
 
       /* Print the function index.  */
       ui_out_field_uint (uiout, "index", bfun->number);
@@ -543,12 +562,7 @@ btrace_call_history (struct ui_out *uiout,
 	    ui_out_text (uiout, "  ");
 	}
 
-      if (sym != NULL)
-	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (sym));
-      else if (msym != NULL)
-	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (msym));
-      else
-	ui_out_field_string (uiout, "function", "<unknown>");
+      ui_out_field_string (uiout, "function", btrace_get_bfun_name (bfun));
 
       if ((flags & RECORD_PRINT_INSN_RANGE) != 0)
 	{
@@ -902,13 +916,100 @@ record_btrace_prepare_to_store (struct target_ops *ops,
       }
 }
 
+/* The branch trace frame cache.  */
+
+struct btrace_frame_cache
+{
+  /* The thread.  */
+  struct thread_info *tp;
+
+  /* The frame info.  */
+  struct frame_info *frame;
+
+  /* The branch trace function segment.  */
+  const struct btrace_function *bfun;
+
+  /* The return PC into this frame.  */
+  CORE_ADDR pc;
+};
+
+/* A struct btrace_frame_cache hash table indexed by NEXT.  */
+
+static htab_t bfcache;
+
+/* hash_f for htab_create_alloc of bfcache.  */
+
+static hashval_t
+bfcache_hash (const void *arg)
+{
+  const struct btrace_frame_cache *cache = arg;
+
+  return htab_hash_pointer (cache->frame);
+}
+
+/* eq_f for htab_create_alloc of bfcache.  */
+
+static int
+bfcache_eq (const void *arg1, const void *arg2)
+{
+  const struct btrace_frame_cache *cache1 = arg1;
+  const struct btrace_frame_cache *cache2 = arg2;
+
+  return cache1->frame == cache2->frame;
+}
+
+/* Create a new btrace frame cache.  */
+
+static struct btrace_frame_cache *
+bfcache_new (struct frame_info *frame)
+{
+  struct btrace_frame_cache *cache;
+  void **slot;
+
+  cache = FRAME_OBSTACK_ZALLOC (struct btrace_frame_cache);
+  cache->frame = frame;
+
+  slot = htab_find_slot (bfcache, cache, INSERT);
+  gdb_assert (*slot == NULL);
+  *slot = cache;
+
+  return cache;
+}
+
+/* Extract the branch trace function from a branch trace frame.  */
+
+static const struct btrace_function *
+btrace_get_frame_function (struct frame_info *frame)
+{
+  const struct btrace_frame_cache *cache;
+  const struct btrace_function *bfun;
+  struct btrace_frame_cache pattern;
+  void **slot;
+
+  pattern.frame = frame;
+
+  slot = htab_find_slot (bfcache, &pattern, NO_INSERT);
+  if (slot == NULL)
+    return NULL;
+
+  cache = *slot;
+  return cache->bfun;
+}
+
 /* Implement stop_reason method for record_btrace_frame_unwind.  */
 
 static enum unwind_stop_reason
 record_btrace_frame_unwind_stop_reason (struct frame_info *this_frame,
 					void **this_cache)
 {
-  return UNWIND_UNAVAILABLE;
+  const struct btrace_frame_cache *cache;
+
+  cache = *this_cache;
+
+  if (cache->bfun == NULL)
+    return UNWIND_UNAVAILABLE;
+
+  return UNWIND_NO_REASON;
 }
 
 /* Implement this_id method for record_btrace_frame_unwind.  */
@@ -917,7 +1018,21 @@ static void
 record_btrace_frame_this_id (struct frame_info *this_frame, void **this_cache,
 			     struct frame_id *this_id)
 {
-  /* Leave there the outer_frame_id value.  */
+  const struct btrace_frame_cache *cache;
+  CORE_ADDR stack, code, special;
+
+  cache = *this_cache;
+
+  stack = 0;
+  code = get_frame_func (this_frame);
+  special = (CORE_ADDR) cache->bfun;
+
+  *this_id = frame_id_build_special (stack, code, special);
+
+  DEBUG ("[frame] %s id: (!stack, pc=%s, special=%s)",
+	 btrace_get_bfun_name (cache->bfun),
+	 core_addr_to_string_nz (this_id->code_addr),
+	 core_addr_to_string_nz (this_id->special_addr));
 }
 
 /* Implement prev_register method for record_btrace_frame_unwind.  */
@@ -927,8 +1042,31 @@ record_btrace_frame_prev_register (struct frame_info *this_frame,
 				   void **this_cache,
 				   int regnum)
 {
-  throw_error (NOT_AVAILABLE_ERROR,
-              _("Registers are not available in btrace record history"));
+  const struct btrace_frame_cache *cache;
+  const struct btrace_function *bfun;
+  struct gdbarch *gdbarch;
+  CORE_ADDR pc;
+  int pcreg;
+
+  gdbarch = get_frame_arch (this_frame);
+  pcreg = gdbarch_pc_regnum (gdbarch);
+  if (pcreg < 0 || regnum != pcreg)
+    throw_error (NOT_AVAILABLE_ERROR,
+		 _("Registers are not available in btrace record history"));
+
+  cache = *this_cache;
+  bfun = cache->bfun;
+  if (bfun == NULL)
+    throw_error (NOT_AVAILABLE_ERROR,
+		 _("Registers are not available in btrace record history"));
+
+  pc = cache->pc;
+
+  DEBUG ("[frame] unwound PC for %s on level %d: %s",
+	 btrace_get_bfun_name (bfun), bfun->level,
+	 core_addr_to_string_nz (pc));
+
+  return frame_unwind_got_address (this_frame, regnum, pc);
 }
 
 /* Implement sniffer method for record_btrace_frame_unwind.  */
@@ -938,9 +1076,14 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
 			     struct frame_info *this_frame,
 			     void **this_cache)
 {
+  const struct btrace_thread_info *btinfo;
+  const struct btrace_insn_iterator *replay;
+  const struct btrace_insn *insn;
+  const struct btrace_function *bfun, *caller;
+  struct btrace_frame_cache *cache;
   struct thread_info *tp;
-  struct btrace_thread_info *btinfo;
-  struct btrace_insn_iterator *replay;
+  struct frame_info *next;
+  CORE_ADDR pc;
 
   /* This doesn't seem right.  Yet, I don't see how I could get from a frame
      to its thread.  */
@@ -948,7 +1091,81 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
   if (tp == NULL)
     return 0;
 
-  return btrace_is_replaying (tp);
+  replay = tp->btrace.replay;
+  if (replay == NULL)
+    return 0;
+
+  /* Find the next frame's branch trace function.  */
+  next = get_next_frame (this_frame);
+  if (next == NULL)
+    {
+      /* The sentinel frame below corresponds to our replay position.  */
+      bfun = replay->function;
+    }
+  else
+    {
+      /* This is an outer frame.  It must be the predecessor of another
+	 branch trace frame.  Let's get this frame's branch trace function
+	 so we can compute our own.  */
+      bfun = btrace_get_frame_function (next);
+    }
+
+  /* If we did not find a branch trace function, this is not our frame.  */
+  if (bfun == NULL)
+    return 0;
+
+  /* Go up to the calling function segment.  */
+  caller = bfun->up;
+  pc = 0;
+
+  /* Determine where to find the PC in the upper function segment.  */
+  if (caller != NULL)
+    {
+      if ((bfun->flags & BFUN_UP_LINKS_TO_RET) != 0)
+	{
+	  insn = VEC_index (btrace_insn_s, caller->insn, 0);
+	  pc = insn->pc;
+	}
+      else
+	{
+	  insn = VEC_last (btrace_insn_s, caller->insn);
+	  pc = insn->pc;
+
+	  /* We link directly to the jump instruction in the case of a tail
+	     call, since the next instruction will likely be outside of the
+	     caller function.  */
+	  if ((bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) == 0)
+	    pc += gdb_insn_length (get_frame_arch (this_frame), pc);
+	}
+
+      DEBUG ("[frame] sniffed frame for %s on level %d",
+	     btrace_get_bfun_name (caller), caller->level);
+    }
+  else
+    DEBUG ("[frame] sniffed top btrace frame");
+
+  /* This is our frame.  Initialize the frame cache.  */
+  cache = bfcache_new (this_frame);
+  cache->tp = tp;
+  cache->bfun = caller;
+  cache->pc = pc;
+
+  *this_cache = cache;
+  return 1;
+}
+
+static void
+record_btrace_frame_dealloc_cache (struct frame_info *self, void *this_cache)
+{
+  struct btrace_frame_cache *cache;
+  void **slot;
+
+  cache = this_cache;
+
+  slot = htab_find_slot (bfcache, cache, NO_INSERT);
+  gdb_assert (slot != NULL);
+
+  htab_remove_elt (bfcache, cache);
 }
 
 /* btrace recording does not store previous memory content, neither the stack
@@ -959,12 +1176,13 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
 
 static const struct frame_unwind record_btrace_frame_unwind =
 {
-  NORMAL_FRAME,
+  BTRACE_FRAME,
   record_btrace_frame_unwind_stop_reason,
   record_btrace_frame_this_id,
   record_btrace_frame_prev_register,
   NULL,
-  record_btrace_frame_sniffer
+  record_btrace_frame_sniffer,
+  record_btrace_frame_dealloc_cache
 };
 
 /* The to_resume method of target record-btrace.  */
@@ -1178,4 +1396,7 @@ _initialize_record_btrace (void)
 
   init_record_btrace_ops ();
   add_target (&record_btrace_ops);
+
+  bfcache = htab_create_alloc (50, bfcache_hash, bfcache_eq, NULL,
+			       xcalloc, xfree);
 }
diff --git a/gdb/testsuite/gdb.btrace/record_goto.exp b/gdb/testsuite/gdb.btrace/record_goto.exp
index a9f9a64..8477a03 100644
--- a/gdb/testsuite/gdb.btrace/record_goto.exp
+++ b/gdb/testsuite/gdb.btrace/record_goto.exp
@@ -75,6 +75,19 @@ gdb_test "record instruction-history" "
 gdb_test "record goto 26" "
 .*fun3 \\(\\) at record_goto.c:35.*" "record_goto - goto 26"
 
+# check the back trace at that location
+gdb_test "backtrace" "
+#0.*fun3.*at record_goto.c:35.*\r
+#1.*fun4.*at record_goto.c:44.*\r
+#2.*main.*at record_goto.c:51.*\r
+Backtrace stopped: not enough registers or memory available to unwind further" "backtrace at 26"
+
+# walk the backtrace
+gdb_test "up" "
+.*fun4.*at record_goto.c:44.*" "up to fun4"
+gdb_test "up" "
+.*main.*at record_goto.c:51.*" "up to main"
+
 # the function call history should start at the new location
 gdb_test "record function-call-history /ci -" "
 8\t    fun3\tinst 19,21\r
diff --git a/gdb/testsuite/gdb.btrace/tailcall.exp b/gdb/testsuite/gdb.btrace/tailcall.exp
index cf9fdf3..ada4b14 100644
--- a/gdb/testsuite/gdb.btrace/tailcall.exp
+++ b/gdb/testsuite/gdb.btrace/tailcall.exp
@@ -47,3 +47,20 @@ gdb_test "record function-call-history /c 1" "
 1\t  foo\r
 2\t    bar\r
 3\tmain" "tailcall - calls indented"
+
+# go into bar
+gdb_test "record goto 3" "
+.*bar \\(\\) at .*x86-tailcall.c:24.*" "go to bar"
+
+# check the backtrace
+gdb_test "backtrace" "
+#0.*bar.*at .*x86-tailcall.c:24.*\r
+#1.*foo.*at .*x86-tailcall.c:29.*\r
+#2.*main.*at .*x86-tailcall.c:37.*\r
+Backtrace stopped: not enough registers or memory available to unwind further" "backtrace in bar"
+
+# walk the backtrace
+gdb_test "up" "
+.*foo \\(\\) at .*x86-tailcall.c:29.*" "up to foo"
+gdb_test "up" "
+.*main \\(\\) at .*x86-tailcall.c:37.*" "up to main"
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 13/24] record-btrace, frame: supply target-specific unwinder
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (13 preceding siblings ...)
  2013-07-03  9:14 ` [patch v4 22/24] infrun: reverse stepping from unknown functions Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:07   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 18/24] record-btrace: extend unwinder Markus Metzger
                   ` (9 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Supply a target-specific frame unwinder for the record-btrace target; this
unwinder does not allow unwinding while replaying.

2013-02-11  Jan Kratochvil  <jan.kratochvil@redhat.com>
            Markus Metzger  <markus.t.metzger@intel.com>

gdb/
	* record-btrace.c: Include frame-unwind.h.
	(record_btrace_frame_unwind_stop_reason,
	record_btrace_frame_this_id, record_btrace_frame_prev_register,
	record_btrace_frame_sniffer, record_btrace_frame_unwind):
	New.
	(init_record_btrace_ops): Install it.


---
 gdb/record-btrace.c |   66 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 66 insertions(+), 0 deletions(-)

diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index e9c0801..cb1f3bb 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -33,6 +33,7 @@
 #include "symtab.h"
 #include "filenames.h"
 #include "regcache.h"
+#include "frame-unwind.h"
 
 /* The target_ops of record-btrace.  */
 static struct target_ops record_btrace_ops;
@@ -844,6 +845,70 @@ record_btrace_prepare_to_store (struct target_ops *ops,
       }
 }
 
+/* Implement stop_reason method for record_btrace_frame_unwind.  */
+
+static enum unwind_stop_reason
+record_btrace_frame_unwind_stop_reason (struct frame_info *this_frame,
+					void **this_cache)
+{
+  return UNWIND_UNAVAILABLE;
+}
+
+/* Implement this_id method for record_btrace_frame_unwind.  */
+
+static void
+record_btrace_frame_this_id (struct frame_info *this_frame, void **this_cache,
+			     struct frame_id *this_id)
+{
+  /* Leave there the outer_frame_id value.  */
+}
+
+/* Implement prev_register method for record_btrace_frame_unwind.  */
+
+static struct value *
+record_btrace_frame_prev_register (struct frame_info *this_frame,
+				   void **this_cache,
+				   int regnum)
+{
+  throw_error (NOT_AVAILABLE_ERROR,
+              _("Registers are not available in btrace record history"));
+}
+
+/* Implement sniffer method for record_btrace_frame_unwind.  */
+
+static int
+record_btrace_frame_sniffer (const struct frame_unwind *self,
+			     struct frame_info *this_frame,
+			     void **this_cache)
+{
+  struct thread_info *tp;
+  struct btrace_thread_info *btinfo;
+  struct btrace_insn_iterator *replay;
+
+  /* This doesn't seem right.  Yet, I don't see how I could get from a frame
+     to its thread.  */
+  tp = find_thread_ptid (inferior_ptid);
+  if (tp == NULL)
+    return 0;
+
+  return btrace_is_replaying (tp);
+}
+
+/* btrace recording does not store previous memory content, neither the stack
+   frames content.  Any unwinding would return erroneous results as the stack
+   contents no longer match the changed PC value restored from history.
+   Therefore this unwinder reports any possibly unwound registers as
+   <unavailable>.  */
+
+static const struct frame_unwind record_btrace_frame_unwind =
+{
+  NORMAL_FRAME,
+  record_btrace_frame_unwind_stop_reason,
+  record_btrace_frame_this_id,
+  record_btrace_frame_prev_register,
+  NULL,
+  record_btrace_frame_sniffer
+};
 /* Initialize the record-btrace target ops.  */
 
 static void
@@ -874,6 +939,7 @@ init_record_btrace_ops (void)
   ops->to_fetch_registers = record_btrace_fetch_registers;
   ops->to_store_registers = record_btrace_store_registers;
   ops->to_prepare_to_store = record_btrace_prepare_to_store;
+  ops->to_get_unwinder = &record_btrace_frame_unwind;
   ops->to_stratum = record_stratum;
   ops->to_magic = OPS_MAGIC;
 }
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 15/24] record-btrace: add to_wait and to_resume target methods.
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (20 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 06/24] btrace: increase buffer size Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-08-18 19:08   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 04/24] record-btrace: fix insn range in function call history Markus Metzger
                   ` (2 subsequent siblings)
  24 siblings, 1 reply; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Add simple to_wait and to_resume target methods that prevent stepping when the
current replay position is not at the end of the execution log.
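
While replaying, both methods report the error added below instead of
resuming, so a 'step' issued somewhere inside the trace prints "You can't do
this from here.  Do 'record goto end', first." until the user returns to the
end of the execution log.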

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* record-btrace.c (record_btrace_resume): New.
	(record_btrace_wait): New.
	(init_record_btrace_ops): Initialize to_wait and to_resume.


---
 gdb/record-btrace.c |   41 +++++++++++++++++++++++++++++++++++++++++
 1 files changed, 41 insertions(+), 0 deletions(-)

diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 831a367..430296a 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -966,6 +966,45 @@ static const struct frame_unwind record_btrace_frame_unwind =
   NULL,
   record_btrace_frame_sniffer
 };
+
+/* The to_resume method of target record-btrace.  */
+
+static void
+record_btrace_resume (struct target_ops *ops, ptid_t ptid, int step,
+		      enum gdb_signal signal)
+{
+  /* As long as we're not replaying, just forward the request.  */
+  if (!record_btrace_is_replaying ())
+    {
+      for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
+	if (ops->to_resume != NULL)
+	  return ops->to_resume (ops, ptid, step, signal);
+
+      error (_("Cannot find target for stepping."));
+    }
+
+  error (_("You can't do this from here.  Do 'record goto end', first."));
+}
+
+/* The to_wait method of target record-btrace.  */
+
+static ptid_t
+record_btrace_wait (struct target_ops *ops, ptid_t ptid,
+		    struct target_waitstatus *status, int options)
+{
+  /* As long as we're not replaying, just forward the request.  */
+  if (!record_btrace_is_replaying ())
+    {
+      for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
+	if (ops->to_wait != NULL)
+	  return ops->to_wait (ops, ptid, status, options);
+
+      error (_("Cannot find target for stepping."));
+    }
+
+  error (_("You can't do this from here.  Do 'record goto end', first."));
+}
+
 /* Initialize the record-btrace target ops.  */
 
 static void
@@ -998,6 +1037,8 @@ init_record_btrace_ops (void)
   ops->to_store_registers = record_btrace_store_registers;
   ops->to_prepare_to_store = record_btrace_prepare_to_store;
   ops->to_get_unwinder = &record_btrace_frame_unwind;
+  ops->to_resume = record_btrace_resume;
+  ops->to_wait = record_btrace_wait;
   ops->to_stratum = record_stratum;
   ops->to_magic = OPS_MAGIC;
 }
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* [patch v4 01/24] gdbarch: add instruction predicate methods
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (17 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 12/24] frame, backtrace: allow targets to supply a frame unwinder Markus Metzger
@ 2013-07-03  9:15 ` Markus Metzger
  2013-07-03  9:49   ` Mark Kettenis
  2013-08-18 19:04   ` Jan Kratochvil
  2013-07-03  9:15 ` [patch v4 17/24] record-btrace: add record goto target methods Markus Metzger
                   ` (5 subsequent siblings)
  24 siblings, 2 replies; 88+ messages in thread
From: Markus Metzger @ 2013-07-03  9:15 UTC (permalink / raw)
  To: jan.kratochvil; +Cc: gdb-patches

Add new methods to gdbarch for analyzing the instruction at a given address.
Implement those methods for i386 and amd64 architectures.
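
A minimal sketch of how a consumer would use the new predicates (illustration
only; the record-btrace changes later in this series are the real consumer):

  /* Classify the instruction at ADDR using the new gdbarch methods.  */

  static const char *
  classify_insn (struct gdbarch *gdbarch, CORE_ADDR addr)
  {
    if (gdbarch_insn_is_call (gdbarch, addr))
      return "call";
    if (gdbarch_insn_is_ret (gdbarch, addr))
      return "return";
    if (gdbarch_insn_is_jump (gdbarch, addr))
      return "jump";

    return "other";
  }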

2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>

	* amd64-tdep.c (amd64_classify_insn_at, amd64_insn_is_call,
	amd64_insn_is_ret, amd64_insn_is_jump, amd64_jmp_p): New.
	(amd64_init_abi): Add insn_is_call, insn_is_ret, and insn_is_jump
	to gdbarch.
	* i386-tdep.c (i386_insn_is_call, i386_insn_is_ret,
	i386_insn_is_jump, i386_jmp_p): New.
	(i386_gdbarch_init): Add insn_is_call, insn_is_ret, and
	insn_is_jump to gdbarch.
	* gdbarch.sh (insn_is_call, insn_is_ret, insn_is_jump): New.
	* gdbarch.h: Regenerated.
	* gdbarch.c: Regenerated.
	* arch-utils.h (default_insn_is_call, default_insn_is_ret,
	default_insn_is_jump): New.
	* arch-utils.c (default_insn_is_call, default_insn_is_ret,
	default_insn_is_jump): New.


---
 gdb/amd64-tdep.c |   67 ++++++++++++++++++++++++++++++++++
 gdb/arch-utils.c |   15 ++++++++
 gdb/arch-utils.h |    4 ++
 gdb/gdbarch.c    |  105 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 gdb/gdbarch.h    |   24 ++++++++++++
 gdb/gdbarch.sh   |    9 +++++
 gdb/i386-tdep.c  |   59 ++++++++++++++++++++++++++++++
 7 files changed, 283 insertions(+), 0 deletions(-)

diff --git a/gdb/amd64-tdep.c b/gdb/amd64-tdep.c
index 3ab74f0..46def57 100644
--- a/gdb/amd64-tdep.c
+++ b/gdb/amd64-tdep.c
@@ -1364,6 +1364,24 @@ amd64_absolute_jmp_p (const struct amd64_insn *details)
   return 0;
 }
 
+/* Return non-zero if the instruction DETAILS is a jump; zero, otherwise.  */
+
+static int
+amd64_jmp_p (const struct amd64_insn *details)
+{
+  const gdb_byte *insn = &details->raw_insn[details->opcode_offset];
+
+  /* jump short, relative.  */
+  if (insn[0] == 0xeb)
+    return 1;
+
+  /* jump near, relative.  */
+  if (insn[0] == 0xe9)
+    return 1;
+
+  return amd64_absolute_jmp_p (details);
+}
+
 static int
 amd64_absolute_call_p (const struct amd64_insn *details)
 {
@@ -1435,6 +1453,52 @@ amd64_syscall_p (const struct amd64_insn *details, int *lengthp)
   return 0;
 }
 
+/* Classify the instruction at ADDR using PRED.
+   Throw an error if the memory can't be read.  */
+
+static int
+amd64_classify_insn_at (struct gdbarch *gdbarch, CORE_ADDR addr,
+			int (*pred) (const struct amd64_insn *))
+{
+  struct amd64_insn details;
+  gdb_byte *buf;
+  int len, classification;
+
+  len = gdbarch_max_insn_length (gdbarch);
+  buf = alloca (len);
+
+  read_memory (addr, buf, len);
+  amd64_get_insn_details (buf, &details);
+
+  classification = pred (&details);
+
+  return classification;
+}
+
+/* The gdbarch insn_is_call method.  */
+
+static int
+amd64_insn_is_call (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  return amd64_classify_insn_at (gdbarch, addr, amd64_call_p);
+}
+
+/* The gdbarch insn_is_ret method.  */
+
+static int
+amd64_insn_is_ret (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  return amd64_classify_insn_at (gdbarch, addr, amd64_ret_p);
+}
+
+/* The gdbarch insn_is_jump method.  */
+
+static int
+amd64_insn_is_jump (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  return amd64_classify_insn_at (gdbarch, addr, amd64_jmp_p);
+}
+
 /* Fix up the state of registers and memory after having single-stepped
    a displaced instruction.  */
 
@@ -2968,6 +3032,9 @@ amd64_init_abi (struct gdbarch_info info, struct gdbarch *gdbarch)
 				      i386_stap_is_single_operand);
   set_gdbarch_stap_parse_special_token (gdbarch,
 					i386_stap_parse_special_token);
+  set_gdbarch_insn_is_call (gdbarch, amd64_insn_is_call);
+  set_gdbarch_insn_is_ret (gdbarch, amd64_insn_is_ret);
+  set_gdbarch_insn_is_jump (gdbarch, amd64_insn_is_jump);
 }
 \f
 
diff --git a/gdb/arch-utils.c b/gdb/arch-utils.c
index 42802a0..851e9e6 100644
--- a/gdb/arch-utils.c
+++ b/gdb/arch-utils.c
@@ -804,6 +804,21 @@ default_return_in_first_hidden_param_p (struct gdbarch *gdbarch,
   return language_pass_by_reference (type);
 }
 
+int default_insn_is_call (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  return 0;
+}
+
+int default_insn_is_ret (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  return 0;
+}
+
+int default_insn_is_jump (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  return 0;
+}
+
 /* */
 
 /* -Wmissing-prototypes */
diff --git a/gdb/arch-utils.h b/gdb/arch-utils.h
index 3f0e64f..2cf83d4 100644
--- a/gdb/arch-utils.h
+++ b/gdb/arch-utils.h
@@ -170,4 +170,8 @@ extern const char *default_auto_wide_charset (void);
 
 extern int default_return_in_first_hidden_param_p (struct gdbarch *,
 						   struct type *);
+
+extern int default_insn_is_call (struct gdbarch *, CORE_ADDR);
+extern int default_insn_is_ret (struct gdbarch *, CORE_ADDR);
+extern int default_insn_is_jump (struct gdbarch *, CORE_ADDR);
 #endif
diff --git a/gdb/gdbarch.c b/gdb/gdbarch.c
index db35b40..6d8a083 100644
--- a/gdb/gdbarch.c
+++ b/gdb/gdbarch.c
@@ -287,6 +287,9 @@ struct gdbarch
   gdbarch_core_info_proc_ftype *core_info_proc;
   gdbarch_iterate_over_objfiles_in_search_order_ftype *iterate_over_objfiles_in_search_order;
   struct ravenscar_arch_ops * ravenscar_ops;
+  gdbarch_insn_is_call_ftype *insn_is_call;
+  gdbarch_insn_is_ret_ftype *insn_is_ret;
+  gdbarch_insn_is_jump_ftype *insn_is_jump;
 };
 
 
@@ -459,6 +462,9 @@ struct gdbarch startup_gdbarch =
   0,  /* core_info_proc */
   default_iterate_over_objfiles_in_search_order,  /* iterate_over_objfiles_in_search_order */
   NULL,  /* ravenscar_ops */
+  0,  /* insn_is_call */
+  0,  /* insn_is_ret */
+  0,  /* insn_is_jump */
   /* startup_gdbarch() */
 };
 
@@ -550,6 +556,9 @@ gdbarch_alloc (const struct gdbarch_info *info,
   gdbarch->gen_return_address = default_gen_return_address;
   gdbarch->iterate_over_objfiles_in_search_order = default_iterate_over_objfiles_in_search_order;
   gdbarch->ravenscar_ops = NULL;
+  gdbarch->insn_is_call = default_insn_is_call;
+  gdbarch->insn_is_ret = default_insn_is_ret;
+  gdbarch->insn_is_jump = default_insn_is_jump;
   /* gdbarch_alloc() */
 
   return gdbarch;
@@ -763,6 +772,9 @@ verify_gdbarch (struct gdbarch *gdbarch)
   /* Skip verify of core_info_proc, has predicate.  */
   /* Skip verify of iterate_over_objfiles_in_search_order, invalid_p == 0 */
   /* Skip verify of ravenscar_ops, invalid_p == 0 */
+  /* Skip verify of insn_is_call, has predicate.  */
+  /* Skip verify of insn_is_ret, has predicate.  */
+  /* Skip verify of insn_is_jump, has predicate.  */
   buf = ui_file_xstrdup (log, &length);
   make_cleanup (xfree, buf);
   if (length > 0)
@@ -1090,6 +1102,24 @@ gdbarch_dump (struct gdbarch *gdbarch, struct ui_file *file)
                       "gdbarch_dump: inner_than = <%s>\n",
                       host_address_to_string (gdbarch->inner_than));
   fprintf_unfiltered (file,
+                      "gdbarch_dump: gdbarch_insn_is_call_p() = %d\n",
+                      gdbarch_insn_is_call_p (gdbarch));
+  fprintf_unfiltered (file,
+                      "gdbarch_dump: insn_is_call = <%s>\n",
+                      host_address_to_string (gdbarch->insn_is_call));
+  fprintf_unfiltered (file,
+                      "gdbarch_dump: gdbarch_insn_is_jump_p() = %d\n",
+                      gdbarch_insn_is_jump_p (gdbarch));
+  fprintf_unfiltered (file,
+                      "gdbarch_dump: insn_is_jump = <%s>\n",
+                      host_address_to_string (gdbarch->insn_is_jump));
+  fprintf_unfiltered (file,
+                      "gdbarch_dump: gdbarch_insn_is_ret_p() = %d\n",
+                      gdbarch_insn_is_ret_p (gdbarch));
+  fprintf_unfiltered (file,
+                      "gdbarch_dump: insn_is_ret = <%s>\n",
+                      host_address_to_string (gdbarch->insn_is_ret));
+  fprintf_unfiltered (file,
                       "gdbarch_dump: int_bit = %s\n",
                       plongest (gdbarch->int_bit));
   fprintf_unfiltered (file,
@@ -4389,6 +4419,81 @@ set_gdbarch_ravenscar_ops (struct gdbarch *gdbarch,
   gdbarch->ravenscar_ops = ravenscar_ops;
 }
 
+int
+gdbarch_insn_is_call_p (struct gdbarch *gdbarch)
+{
+  gdb_assert (gdbarch != NULL);
+  return gdbarch->insn_is_call != default_insn_is_call;
+}
+
+int
+gdbarch_insn_is_call (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  gdb_assert (gdbarch != NULL);
+  gdb_assert (gdbarch->insn_is_call != NULL);
+  /* Do not check predicate: gdbarch->insn_is_call != default_insn_is_call, allow call.  */
+  if (gdbarch_debug >= 2)
+    fprintf_unfiltered (gdb_stdlog, "gdbarch_insn_is_call called\n");
+  return gdbarch->insn_is_call (gdbarch, addr);
+}
+
+void
+set_gdbarch_insn_is_call (struct gdbarch *gdbarch,
+                          gdbarch_insn_is_call_ftype insn_is_call)
+{
+  gdbarch->insn_is_call = insn_is_call;
+}
+
+int
+gdbarch_insn_is_ret_p (struct gdbarch *gdbarch)
+{
+  gdb_assert (gdbarch != NULL);
+  return gdbarch->insn_is_ret != default_insn_is_ret;
+}
+
+int
+gdbarch_insn_is_ret (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  gdb_assert (gdbarch != NULL);
+  gdb_assert (gdbarch->insn_is_ret != NULL);
+  /* Do not check predicate: gdbarch->insn_is_ret != default_insn_is_ret, allow call.  */
+  if (gdbarch_debug >= 2)
+    fprintf_unfiltered (gdb_stdlog, "gdbarch_insn_is_ret called\n");
+  return gdbarch->insn_is_ret (gdbarch, addr);
+}
+
+void
+set_gdbarch_insn_is_ret (struct gdbarch *gdbarch,
+                         gdbarch_insn_is_ret_ftype insn_is_ret)
+{
+  gdbarch->insn_is_ret = insn_is_ret;
+}
+
+int
+gdbarch_insn_is_jump_p (struct gdbarch *gdbarch)
+{
+  gdb_assert (gdbarch != NULL);
+  return gdbarch->insn_is_jump != default_insn_is_jump;
+}
+
+int
+gdbarch_insn_is_jump (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  gdb_assert (gdbarch != NULL);
+  gdb_assert (gdbarch->insn_is_jump != NULL);
+  /* Do not check predicate: gdbarch->insn_is_jump != default_insn_is_jump, allow call.  */
+  if (gdbarch_debug >= 2)
+    fprintf_unfiltered (gdb_stdlog, "gdbarch_insn_is_jump called\n");
+  return gdbarch->insn_is_jump (gdbarch, addr);
+}
+
+void
+set_gdbarch_insn_is_jump (struct gdbarch *gdbarch,
+                          gdbarch_insn_is_jump_ftype insn_is_jump)
+{
+  gdbarch->insn_is_jump = insn_is_jump;
+}
+
 
 /* Keep a registry of per-architecture data-pointers required by GDB
    modules.  */
diff --git a/gdb/gdbarch.h b/gdb/gdbarch.h
index e1959c3..ba40ef6 100644
--- a/gdb/gdbarch.h
+++ b/gdb/gdbarch.h
@@ -1248,6 +1248,30 @@ extern void set_gdbarch_iterate_over_objfiles_in_search_order (struct gdbarch *g
 extern struct ravenscar_arch_ops * gdbarch_ravenscar_ops (struct gdbarch *gdbarch);
 extern void set_gdbarch_ravenscar_ops (struct gdbarch *gdbarch, struct ravenscar_arch_ops * ravenscar_ops);
 
+/* Return non-zero if the instruction at ADDR is a call; zero otherwise. */
+
+extern int gdbarch_insn_is_call_p (struct gdbarch *gdbarch);
+
+typedef int (gdbarch_insn_is_call_ftype) (struct gdbarch *gdbarch, CORE_ADDR addr);
+extern int gdbarch_insn_is_call (struct gdbarch *gdbarch, CORE_ADDR addr);
+extern void set_gdbarch_insn_is_call (struct gdbarch *gdbarch, gdbarch_insn_is_call_ftype *insn_is_call);
+
+/* Return non-zero if the instruction at ADDR is a return; zero otherwise. */
+
+extern int gdbarch_insn_is_ret_p (struct gdbarch *gdbarch);
+
+typedef int (gdbarch_insn_is_ret_ftype) (struct gdbarch *gdbarch, CORE_ADDR addr);
+extern int gdbarch_insn_is_ret (struct gdbarch *gdbarch, CORE_ADDR addr);
+extern void set_gdbarch_insn_is_ret (struct gdbarch *gdbarch, gdbarch_insn_is_ret_ftype *insn_is_ret);
+
+/* Return non-zero if the instruction at ADDR is a jump; zero otherwise. */
+
+extern int gdbarch_insn_is_jump_p (struct gdbarch *gdbarch);
+
+typedef int (gdbarch_insn_is_jump_ftype) (struct gdbarch *gdbarch, CORE_ADDR addr);
+extern int gdbarch_insn_is_jump (struct gdbarch *gdbarch, CORE_ADDR addr);
+extern void set_gdbarch_insn_is_jump (struct gdbarch *gdbarch, gdbarch_insn_is_jump_ftype *insn_is_jump);
+
 /* Definition for an unknown syscall, used basically in error-cases.  */
 #define UNKNOWN_SYSCALL (-1)
 
diff --git a/gdb/gdbarch.sh b/gdb/gdbarch.sh
index c92a857..5b73301 100755
--- a/gdb/gdbarch.sh
+++ b/gdb/gdbarch.sh
@@ -976,6 +976,15 @@ m:void:iterate_over_objfiles_in_search_order:iterate_over_objfiles_in_search_ord
 
 # Ravenscar arch-dependent ops.
 v:struct ravenscar_arch_ops *:ravenscar_ops:::NULL:NULL::0:host_address_to_string (gdbarch->ravenscar_ops)
+
+# Return non-zero if the instruction at ADDR is a call; zero otherwise.
+M:int:insn_is_call:CORE_ADDR addr:addr::default_insn_is_call
+
+# Return non-zero if the instruction at ADDR is a return; zero otherwise.
+M:int:insn_is_ret:CORE_ADDR addr:addr::default_insn_is_ret
+
+# Return non-zero if the instruction at ADDR is a jump; zero otherwise.
+M:int:insn_is_jump:CORE_ADDR addr:addr::default_insn_is_jump
 EOF
 }
 
diff --git a/gdb/i386-tdep.c b/gdb/i386-tdep.c
index 930d6fc..694b58c 100644
--- a/gdb/i386-tdep.c
+++ b/gdb/i386-tdep.c
@@ -472,6 +472,22 @@ i386_absolute_jmp_p (const gdb_byte *insn)
   return 0;
 }
 
+/* Return non-zero if INSN is a jump; zero, otherwise.  */
+
+static int
+i386_jmp_p (const gdb_byte *insn)
+{
+  /* jump short, relative.  */
+  if (insn[0] == 0xeb)
+    return 1;
+
+  /* jump near, relative.  */
+  if (insn[0] == 0xe9)
+    return 1;
+
+  return i386_absolute_jmp_p (insn);
+}
+
 static int
 i386_absolute_call_p (const gdb_byte *insn)
 {
@@ -543,6 +559,45 @@ i386_syscall_p (const gdb_byte *insn, int *lengthp)
   return 0;
 }
 
+/* The gdbarch insn_is_call method.  */
+
+static int
+i386_insn_is_call (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  gdb_byte buf[I386_MAX_INSN_LEN], *insn;
+
+  read_memory (addr, buf, I386_MAX_INSN_LEN);
+  insn = i386_skip_prefixes (buf, I386_MAX_INSN_LEN);
+
+  return i386_call_p (insn);
+}
+
+/* The gdbarch insn_is_ret method.  */
+
+static int
+i386_insn_is_ret (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  gdb_byte buf[I386_MAX_INSN_LEN], *insn;
+
+  read_memory (addr, buf, I386_MAX_INSN_LEN);
+  insn = i386_skip_prefixes (buf, I386_MAX_INSN_LEN);
+
+  return i386_ret_p (insn);
+}
+
+/* The gdbarch insn_is_jump method.  */
+
+static int
+i386_insn_is_jump (struct gdbarch *gdbarch, CORE_ADDR addr)
+{
+  gdb_byte buf[I386_MAX_INSN_LEN], *insn;
+
+  read_memory (addr, buf, I386_MAX_INSN_LEN);
+  insn = i386_skip_prefixes (buf, I386_MAX_INSN_LEN);
+
+  return i386_jmp_p (insn);
+}
+
 /* Some kernels may run one past a syscall insn, so we have to cope.
    Otherwise this is just simple_displaced_step_copy_insn.  */
 
@@ -7774,6 +7829,10 @@ i386_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
 
   set_gdbarch_gen_return_address (gdbarch, i386_gen_return_address);
 
+  set_gdbarch_insn_is_call (gdbarch, i386_insn_is_call);
+  set_gdbarch_insn_is_ret (gdbarch, i386_insn_is_ret);
+  set_gdbarch_insn_is_jump (gdbarch, i386_insn_is_jump);
+
   /* Hook in ABI-specific overrides, if they have been registered.  */
   info.tdep_info = (void *) tdesc_data;
   gdbarch_init_osabi (info, gdbarch);
-- 
1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 01/24] gdbarch: add instruction predicate methods
  2013-07-03  9:15 ` [patch v4 01/24] gdbarch: add instruction predicate methods Markus Metzger
@ 2013-07-03  9:49   ` Mark Kettenis
  2013-07-03 11:10     ` Metzger, Markus T
  2013-08-18 19:04   ` Jan Kratochvil
  1 sibling, 1 reply; 88+ messages in thread
From: Mark Kettenis @ 2013-07-03  9:49 UTC (permalink / raw)
  To: markus.t.metzger; +Cc: jan.kratochvil, gdb-patches

> 
> Add new methods to gdbarch for analyzing the instruction at a given address.
> Implement those methods for i386 and amd64 architectures.

This is all really amd64/i386-centric.  On a more abstract level, what
is the difference between "call", "ret" and "jump"?

> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* amd64-tdep.c (amd64_classify_insn_at, amd64_insn_is_call,
> 	amd64_insn_is_ret, amd64_insn_is_jump, amd64_jmp_p): New.
> 	(amd64_init_abi): Add insn_is_call, insn_is_ret, and insn_is_jump
> 	to gdbarch.
> 	* i386-tdep.c (i386_insn_is_call, i386_insn_is_ret,
> 	i386_insn_is_jump, i386_jmp_p): New.
> 	(i386_gdbarch_init): Add insn_is_call, insn_is_ret, and
> 	insn_is_jump to gdbarch.
> 	* gdbarch.sh (insn_is_call, insn_is_ret, insn_is_jump): New.
> 	* gdbarch.h: Regenerated.
> 	* gdbarch.c: Regenerated.
> 	* arch-utils.h (default_insn_is_call, default_insn_is_ret,
> 	default_insn_is_jump): New.
> 	* arch-utils.c (default_insn_is_call, default_insn_is_ret,
> 	default_insn_is_jump): New.
> 
> 
> ---
>  gdb/amd64-tdep.c |   67 ++++++++++++++++++++++++++++++++++
>  gdb/arch-utils.c |   15 ++++++++
>  gdb/arch-utils.h |    4 ++
>  gdb/gdbarch.c    |  105 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  gdb/gdbarch.h    |   24 ++++++++++++
>  gdb/gdbarch.sh   |    9 +++++
>  gdb/i386-tdep.c  |   59 ++++++++++++++++++++++++++++++
>  7 files changed, 283 insertions(+), 0 deletions(-)

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 01/24] gdbarch: add instruction predicate methods
  2013-07-03  9:49   ` Mark Kettenis
@ 2013-07-03 11:10     ` Metzger, Markus T
  0 siblings, 0 replies; 88+ messages in thread
From: Metzger, Markus T @ 2013-07-03 11:10 UTC (permalink / raw)
  To: Mark Kettenis; +Cc: jan.kratochvil, gdb-patches

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-owner@sourceware.org] On Behalf Of Mark Kettenis
> Sent: Wednesday, July 03, 2013 11:49 AM

> > Add new methods to gdbarch for analyzing the instruction at a given address.
> > Implement those methods for i386 and amd64 architectures.
> 
> This is all really amd64/i386-centric.  On a more abstract level, what
> is the difference between "call", "ret" and "jump"?

Call is calling into a function, ret is returning from a function back to its caller,
and jump is an intra-function branch.

Call and return are language concepts so they should be OK.  Jump is already
a generalization (note the extra 'u'), but we may change this to 'goto' or some
other term if you like.

At the moment, I assume that there is a single instruction for each.  If it turns
out that there are architectures that do this in more than one instruction, we
will need to extend the algorithm in record-btrace.c.  Since x86 is currently the
only architecture that supports branch tracing, this suffices for now.

Regards,
Markus.
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 01/24] gdbarch: add instruction predicate methods
  2013-07-03  9:15 ` [patch v4 01/24] gdbarch: add instruction predicate methods Markus Metzger
  2013-07-03  9:49   ` Mark Kettenis
@ 2013-08-18 19:04   ` Jan Kratochvil
  1 sibling, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:04 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:11 +0200, Markus Metzger wrote:
[...]
> diff --git a/gdb/gdbarch.sh b/gdb/gdbarch.sh
> index c92a857..5b73301 100755
> --- a/gdb/gdbarch.sh
> +++ b/gdb/gdbarch.sh
> @@ -976,6 +976,15 @@ m:void:iterate_over_objfiles_in_search_order:iterate_over_objfiles_in_search_ord
>  
>  # Ravenscar arch-dependent ops.
>  v:struct ravenscar_arch_ops *:ravenscar_ops:::NULL:NULL::0:host_address_to_string (gdbarch->ravenscar_ops)
> +
> +# Return non-zero if the instruction at ADDR is a call; zero otherwise.
> +M:int:insn_is_call:CORE_ADDR addr:addr::default_insn_is_call
> +
> +# Return non-zero if the instruction at ADDR is a return; zero otherwise.
> +M:int:insn_is_ret:CORE_ADDR addr:addr::default_insn_is_ret
> +
> +# Return non-zero if the instruction at ADDR is a jump; zero otherwise.
> +M:int:insn_is_jump:CORE_ADDR addr:addr::default_insn_is_jump

As you no longer use the gdbarch_METHODNAME_p checks to see whether the method
is implemented on that gdbarch, you can change the initial 'M' to 'm' so that
these unused *_p methods are no longer generated.
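
For example, the insn_is_call line quoted above would then start with a
lower-case 'm': m:int:insn_is_call:CORE_ADDR addr:addr::default_insn_is_call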


>  EOF
>  }
>  
[...]

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 00/24] record-btrace: reverse
  2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
                   ` (23 preceding siblings ...)
  2013-07-03  9:15 ` [patch v4 21/24] record-btrace: show trace from enable location Markus Metzger
@ 2013-08-18 19:04 ` Jan Kratochvil
  24 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:04 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

Hi Markus,

sorry for the late review.  There are some remaining questions in this reply
so it is not yet an approval of the whole series.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 03/24] btrace: change branch trace data structure
  2013-07-03  9:14 ` [patch v4 03/24] btrace: change branch trace data structure Markus Metzger
@ 2013-08-18 19:05   ` Jan Kratochvil
  2013-09-10  9:11     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:05 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches, Christian Himpel

On Wed, 03 Jul 2013 11:14:13 +0200, Markus Metzger wrote:
> The branch trace is represented as 3 vectors:
>   - a block vector
>   - an instruction vector
>   - a function vector
> 
> Each vector (except for the first) is computed from the one above.
> 
> Change this into a graph where a node represents a sequence of instructions
> belonging to the same function and where we have three types of edges to connect
> the function segments:
>   - control flow
>   - same function (instance)
>   - call stack
> 
> This allows us to navigate in the branch trace.  We will need this for "record
> goto" and reverse execution.
> 
> This patch introduces the data structure and computes the control flow edges.
> It also introduces iterator structs to simplify iterating over the branch trace
> in control-flow order.
> 
> It also fixes PR gdb/15240 since now recursive calls are handled correctly.
> Fix the test that got the number of expected fib instances and also the
> function numbers wrong.
> 
> The current instruction had been part of the branch trace.  This will look odd
> once we start support for reverse execution.  Remove it.  We still keep it in
> the trace itself to allow extending the branch trace more easily in the future.
> 
> CC: Christian Himpel <christian.himpel@intel.com>
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* btrace.h (struct btrace_func_link): New.
> 	(enum btrace_function_flag): New.
> 	(struct btrace_inst): Rename to ...
> 	(struct btrace_insn): ...this. Update all users.
> 	(struct btrace_func) <ibegin, iend>: Remove.
> 	(struct btrace_func_link): New.
> 	(struct btrace_func): Rename to ...
> 	(struct btrace_function): ...this. Update all users.
> 	(struct btrace_function) <segment, flow, up, insn, insn_offset,
> 	number, level, flags>: New.
> 	(struct btrace_insn_iterator): Rename to ...
> 	(struct btrace_insn_history): ...this.
> 	Update all users.
> 	(struct btrace_insn_iterator, btrace_call_iterator): New.
> 	(struct btrace_target_info) <btrace, itrace, ftrace>: Remove.
> 	(struct btrace_target_info) <begin, end, level,
> 	insn_history, call_history>: New.
> 	(btrace_insn_get, btrace_insn_number, btrace_insn_begin,
> 	btrace_insn_end, btrace_insn_prev, btrace_insn_next,
> 	btrace_insn_cmp, btrace_find_insn_by_number, btrace_call_get,
> 	btrace_call_number, btrace_call_begin, btrace_call_end,
> 	btrace_call_prev, btrace_call_next, btrace_call_cmp,
> 	btrace_find_function_by_number, btrace_set_insn_history,
> 	btrace_set_call_history): New.
> 	* btrace.c (btrace_init_insn_iterator,
> 	btrace_init_func_iterator, compute_itrace): Remove.
> 	(ftrace_print_function_name, ftrace_print_filename,
> 	ftrace_skip_file): Change
> 	parameter to const.
> 	(ftrace_init_func): Remove.
> 	(ftrace_debug): Use new btrace_function fields.
> 	(ftrace_function_switched): Also consider gaining and
> 	losing symbol information).
> 	(ftrace_print_insn_addr, ftrace_new_call, ftrace_new_return,
> 	ftrace_new_switch, ftrace_find_caller, ftrace_new_function,
> 	ftrace_update_caller, ftrace_fixup_caller, ftrace_new_tailcall):
> 	New.
> 	(ftrace_new_function): Move. Remove debug print.
> 	(ftrace_update_lines, ftrace_update_insns): New.
> 	(ftrace_update_function): Check for call, ret, and jump.
> 	(compute_ftrace): Renamed to ...
> 	(btrace_compute_ftrace): ...this. Rewritten to compute call
> 	stack.
> 	(btrace_fetch, btrace_clear): Updated.
> 	(btrace_insn_get, btrace_insn_number, btrace_insn_begin,
> 	btrace_insn_end, btrace_insn_prev, btrace_insn_next,
> 	btrace_insn_cmp, btrace_find_insn_by_number, btrace_call_get,
> 	btrace_call_number, btrace_call_begin, btrace_call_end,
> 	btrace_call_prev, btrace_call_next, btrace_call_cmp,
> 	btrace_find_function_by_number, btrace_set_insn_history,
> 	btrace_set_call_history): New.
> 	* record-btrace.c (require_btrace): Use new btrace thread
> 	info fields.
> 	(record_btrace_info, btrace_insn_history,
> 	record_btrace_insn_history, record_btrace_insn_history_range):
> 	Use new btrace thread info fields and new iterator.
> 	(btrace_func_history_src_line): Rename to ...
> 	(btrace_call_history_src_line): ...this. Use new btrace
> 	thread info fields.
> 	(btrace_func_history): Rename to ...
> 	(btrace_call_history): ...this. Use new btrace thread info
> 	fields and new iterator.
> 	(record_btrace_call_history, record_btrace_call_history_range):
> 	Use new btrace thread info fields and new iterator.
> 
> testsuite/
> 	* gdb.btrace/function_call_history.exp: Fix expected function
> 	trace.
> 
> 
> ---
>  gdb/btrace.c                                       | 1186 +++++++++++++++++---
>  gdb/btrace.h                                       |  230 ++++-
>  gdb/record-btrace.c                                |  342 +++---
>  gdb/testsuite/gdb.btrace/function_call_history.exp |   28 +-
>  gdb/testsuite/gdb.btrace/instruction_history.exp   |   12 +-
>  5 files changed, 1405 insertions(+), 393 deletions(-)
> 
> diff --git a/gdb/btrace.c b/gdb/btrace.c
> index 3230a3e..53549db 100644
> --- a/gdb/btrace.c
> +++ b/gdb/btrace.c
> @@ -45,92 +45,11 @@
>  
>  #define DEBUG_FTRACE(msg, args...) DEBUG ("[ftrace] " msg, ##args)
>  
> -/* Initialize the instruction iterator.  */
> -
> -static void
> -btrace_init_insn_iterator (struct btrace_thread_info *btinfo)
> -{
> -  DEBUG ("init insn iterator");
> -
> -  btinfo->insn_iterator.begin = 1;
> -  btinfo->insn_iterator.end = 0;
> -}
> -
> -/* Initialize the function iterator.  */
> -
> -static void
> -btrace_init_func_iterator (struct btrace_thread_info *btinfo)
> -{
> -  DEBUG ("init func iterator");
> -
> -  btinfo->func_iterator.begin = 1;
> -  btinfo->func_iterator.end = 0;
> -}
> -
> -/* Compute the instruction trace from the block trace.  */
> -
> -static VEC (btrace_inst_s) *
> -compute_itrace (VEC (btrace_block_s) *btrace)
> -{
> -  VEC (btrace_inst_s) *itrace;
> -  struct gdbarch *gdbarch;
> -  unsigned int b;
> -
> -  DEBUG ("compute itrace");
> -
> -  itrace = NULL;
> -  gdbarch = target_gdbarch ();
> -  b = VEC_length (btrace_block_s, btrace);
> -
> -  while (b-- != 0)
> -    {
> -      btrace_block_s *block;
> -      CORE_ADDR pc;
> -
> -      block = VEC_index (btrace_block_s, btrace, b);
> -      pc = block->begin;
> -
> -      /* Add instructions for this block.  */
> -      for (;;)
> -	{
> -	  btrace_inst_s *inst;
> -	  int size;
> -
> -	  /* We should hit the end of the block.  Warn if we went too far.  */
> -	  if (block->end < pc)
> -	    {
> -	      warning (_("Recorded trace may be corrupted."));
> -	      break;
> -	    }
> -
> -	  inst = VEC_safe_push (btrace_inst_s, itrace, NULL);
> -	  inst->pc = pc;
> -
> -	  /* We're done once we pushed the instruction at the end.  */
> -	  if (block->end == pc)
> -	    break;
> -
> -	  size = gdb_insn_length (gdbarch, pc);
> -
> -	  /* Make sure we terminate if we fail to compute the size.  */
> -	  if (size <= 0)
> -	    {
> -	      warning (_("Recorded trace may be incomplete."));
> -	      break;
> -	    }
> -
> -	  pc += size;
> -	}
> -    }
> -
> -  return itrace;
> -}
> -
>  /* Return the function name of a recorded function segment for printing.
>     This function never returns NULL.  */
>  
>  static const char *
> -ftrace_print_function_name (struct btrace_func *bfun)
> +ftrace_print_function_name (const struct btrace_function *bfun)
>  {
>    struct minimal_symbol *msym;
>    struct symbol *sym;
> @@ -151,7 +70,7 @@ ftrace_print_function_name (struct btrace_func *bfun)
>     This function never returns NULL.  */
>  
>  static const char *
> -ftrace_print_filename (struct btrace_func *bfun)
> +ftrace_print_filename (const struct btrace_function *bfun)
>  {
>    struct symbol *sym;
>    const char *filename;
> @@ -166,44 +85,53 @@ ftrace_print_filename (struct btrace_func *bfun)
>    return filename;
>  }
>  
> -/* Print an ftrace debug status message.  */
> +/* Print the address of an instruction.

It does not "print" it; rather it should say "Return string representation of
address of an instruction.".


> +   This function never returns NULL.  */
>  
> -static void
> -ftrace_debug (struct btrace_func *bfun, const char *prefix)
> +static const char *
> +ftrace_print_insn_addr (const struct btrace_insn *insn)
>  {
> -  DEBUG_FTRACE ("%s: fun = %s, file = %s, lines = [%d; %d], insn = [%u; %u]",
> -		prefix, ftrace_print_function_name (bfun),
> -		ftrace_print_filename (bfun), bfun->lbegin, bfun->lend,
> -		bfun->ibegin, bfun->iend);
> +  if (insn == NULL)
> +    return "<nil>";
> +
> +  return core_addr_to_string_nz (insn->pc);
>  }
>  
> -/* Initialize a recorded function segment.  */
> +/* Print an ftrace debug status message.  */
>  
>  static void
> -ftrace_init_func (struct btrace_func *bfun, struct minimal_symbol *mfun,
> -		  struct symbol *fun, unsigned int idx)
> +ftrace_debug (const struct btrace_function *bfun, const char *prefix)
>  {
> -  bfun->msym = mfun;
> -  bfun->sym = fun;
> -  bfun->lbegin = INT_MAX;
> -  bfun->lend = 0;
> -  bfun->ibegin = idx;
> -  bfun->iend = idx;
> +  const char *fun, *file;
> +  unsigned int ibegin, iend;
> +  int lbegin, lend, level;
> +
> +  fun = ftrace_print_function_name (bfun);
> +  file = ftrace_print_filename (bfun);
> +  level = bfun->level;
> +
> +  lbegin = bfun->lbegin;
> +  lend = bfun->lend;
> +
> +  ibegin = bfun->insn_offset;
> +  iend = ibegin + VEC_length (btrace_insn_s, bfun->insn);
> +
> +  DEBUG_FTRACE ("%s: fun = %s, file = %s, level = %d, lines = [%d; %d], "
> +		"insn = [%u; %u)", prefix, fun, file, level, lbegin, lend,
> +		ibegin, iend);
>  }
>  
> -/* Check whether the function has changed.  */
> +/* Return non-zero if BFUN does not match MFUN and FUN;
> +   return zero, otherwise.  */
>  
>  static int
> -ftrace_function_switched (struct btrace_func *bfun,
> -			  struct minimal_symbol *mfun, struct symbol *fun)
> +ftrace_function_switched (const struct btrace_function *bfun,
> +			  const struct minimal_symbol *mfun,
> +			  const struct symbol *fun)
>  {
>    struct minimal_symbol *msym;
>    struct symbol *sym;
>  
> -  /* The function changed if we did not have one before.  */
> -  if (bfun == NULL)
> -    return 1;
> -
>    msym = bfun->msym;
>    sym = bfun->sym;
>  
> @@ -228,15 +156,24 @@ ftrace_function_switched (struct btrace_func *bfun,
>  	return 1;
>      }
>  
> +  /* If we lost symbol information, we switched functions.  */
> +  if (!(msym == NULL && sym == NULL) && mfun == NULL && fun == NULL)
> +    return 1;
> +
> +  /* If we gained symbol information, we switched functions.  */
> +  if (msym == NULL && sym == NULL && !(mfun == NULL && fun == NULL))
> +    return 1;
> +
>    return 0;
>  }
>  
> -/* Check if we should skip this file when generating the function call
> -   history.  We would want to do that if, say, a macro that is defined
> -   in another file is expanded in this function.  */
> +/* Return non-zero if we should skip this file when generating the function
> +   call history; zero, otherwise.
> +   We would want to do that if, say, a macro that is defined in another file
> +   is expanded in this function.  */
>  
>  static int
> -ftrace_skip_file (struct btrace_func *bfun, const char *filename)
> +ftrace_skip_file (const struct btrace_function *bfun, const char *fullname)
>  {
>    struct symbol *sym;
>    const char *bfile;
> @@ -248,89 +185,477 @@ ftrace_skip_file (struct btrace_func *bfun, const char *filename)
>    else
>      bfile = "";
>  
> -  if (filename == NULL)
> -    filename = "";
> +  if (fullname == NULL)
> +    fullname = "";

The code should not assume FULLNAME cannot be ""; "" is theoretically a valid
source file filename.

Second reason is that currently no caller of ftrace_skip_file will pass NULL
as the second parameter.

So the function can be just:

  if (sym == NULL)
    return 1;

  bfile = symtab_to_fullname (sym->symtab);

  return filename_cmp (bfile, fullname) != 0;

And the function has only one caller, so IMO it would be easier to read if it
were inlined there.
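
For illustration only, a rough sketch of what the inlined form could look like
in ftrace_update_lines (untested, just to show the idea):

  fullname = symtab_to_fullname (sal.symtab);
  if (bfun->sym == NULL
      || filename_cmp (symtab_to_fullname (bfun->sym->symtab), fullname) != 0)
    {
      DEBUG_FTRACE ("ignoring file at %s, file=%s",
                    core_addr_to_string_nz (pc), fullname);
      return;
    }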

And I am not sure if it matters much, but doing two symtab_to_fullname calls
just to compare two symtabs for equality is needlessly expensive
- symtab_to_fullname is very expensive.  There are several places in GDB that
first do:
          /* Before we invoke realpath, which can get expensive when many
             files are involved, do a quick comparison of the basenames.  */
          if (!basenames_may_differ
              && filename_cmp (lbasename (symtab1->filename),
                               lbasename (symtab2->filename)) != 0)
            continue;


>  
> -  return (filename_cmp (bfile, filename) != 0);
> +  return (filename_cmp (bfile, fullname) != 0);
>  }
>  
> -/* Compute the function trace from the instruction trace.  */
> +/* Allocate and initialize a new branch trace function segment.
> +   PREV is the chronologically preceding function segment.
> +   MFUN and FUN are the symbol information we have for this function.  */
>  
> -static VEC (btrace_func_s) *
> -compute_ftrace (VEC (btrace_inst_s) *itrace)
> +static struct btrace_function *
> +ftrace_new_function (struct btrace_function *prev,
> +		     struct minimal_symbol *mfun,
> +		     struct symbol *fun)
>  {
> -  VEC (btrace_func_s) *ftrace;
> -  struct btrace_inst *binst;
> -  struct btrace_func *bfun;
> -  unsigned int idx;
> +  struct btrace_function *bfun;
>  
> -  DEBUG ("compute ftrace");
> +  bfun = xzalloc (sizeof (*bfun));
> +
> +  bfun->msym = mfun;
> +  bfun->sym = fun;
> +  bfun->flow.prev = prev;
> +
> +  /* We start with the identities of min and max, respectively.  */
> +  bfun->lbegin = INT_MAX;
> +  bfun->lend = INT_MIN;
> +
> +  if (prev != NULL)
> +    {
> +      gdb_assert (prev->flow.next == NULL);
> +      prev->flow.next = bfun;
> +
> +      bfun->number = prev->number + 1;
> +      bfun->insn_offset = (prev->insn_offset
> +			   + VEC_length (btrace_insn_s, prev->insn));
> +    }
> +
> +  return bfun;
> +}
> +
> +/* Update the UP field of a function segment.  */
>  
> -  ftrace = NULL;
> -  bfun = NULL;
> +static void
> +ftrace_update_caller (struct btrace_function *bfun,
> +		      struct btrace_function *caller,
> +		      unsigned int flags)

FLAGS should be enum btrace_function_flag (it is an ORed bitmask, but GDB
displays ORed enum bitmasks appropriately).
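
I.e. roughly (sketch only):

  static void
  ftrace_update_caller (struct btrace_function *bfun,
                        struct btrace_function *caller,
                        enum btrace_function_flag flags)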


> +{
> +  if (bfun->up != NULL)
> +    ftrace_debug (bfun, "updating caller");
> +
> +  bfun->up = caller;
> +  bfun->flags = flags;
> +
> +  ftrace_debug (bfun, "set caller");
> +}
> +
> +/* Fix up the caller for a function segment.  */

IIUC it should be:

/* Fix up the caller for all segments of a function call.  */


>  
> -  for (idx = 0; VEC_iterate (btrace_inst_s, itrace, idx, binst); ++idx)
> +static void
> +ftrace_fixup_caller (struct btrace_function *bfun,
> +		     struct btrace_function *caller,
> +		     unsigned int flags)

FLAGS should be enum btrace_function_flag (it is an ORed bitmask, but GDB
displays ORed enum bitmasks appropriately).


> +{
> +  struct btrace_function *prev, *next;
> +
> +  ftrace_update_caller (bfun, caller, flags);
> +
> +  /* Update all function segments belonging to the same function.  */
> +  for (prev = bfun->segment.prev; prev != NULL; prev = prev->segment.prev)
> +    ftrace_update_caller (prev, caller, flags);
> +
> +  for (next = bfun->segment.next; next != NULL; next = next->segment.next)
> +    ftrace_update_caller (next, caller, flags);
> +}
> +
> +/* Add a new function segment for a call.
> +   CALLER is the chronologically preceding function segment.
> +   MFUN and FUN are the symbol information we have for this function.  */
> +
> +static struct btrace_function *
> +ftrace_new_call (struct btrace_function *caller,
> +		 struct minimal_symbol *mfun,
> +		 struct symbol *fun)
> +{
> +  struct btrace_function *bfun;
> +
> +  bfun = ftrace_new_function (caller, mfun, fun);
> +  bfun->up = caller;
> +  bfun->level = caller->level + 1;
> +
> +  ftrace_debug (bfun, "new call");
> +
> +  return bfun;
> +}
> +
> +/* Add a new function segment for a tail call.
> +   CALLER is the chronologically preceding function segment.
> +   MFUN and FUN are the symbol information we have for this function.  */
> +
> +static struct btrace_function *
> +ftrace_new_tailcall (struct btrace_function *caller,
> +		     struct minimal_symbol *mfun,
> +		     struct symbol *fun)
> +{
> +  struct btrace_function *bfun;
> +
> +  bfun = ftrace_new_function (caller, mfun, fun);
> +  bfun->up = caller;
> +  bfun->level = caller->level + 1;
> +  bfun->flags |= BFUN_UP_LINKS_TO_TAILCALL;
> +
> +  ftrace_debug (bfun, "new tail call");
> +
> +  return bfun;
> +}
> +
> +/* Find the innermost caller in the back trace of BFUN with MFUN/FUN
> +   symbol information.  */
> +
> +static struct btrace_function *
> +ftrace_find_caller (struct btrace_function *bfun,
> +		    struct minimal_symbol *mfun,
> +		    struct symbol *fun)
> +{
> +  for (; bfun != NULL; bfun = bfun->up)
>      {
> -      struct symtab_and_line sal;
> -      struct bound_minimal_symbol mfun;
> -      struct symbol *fun;
> -      const char *filename;
> +      /* Skip functions with incompatible symbol information.  */
> +      if (ftrace_function_switched (bfun, mfun, fun))
> +	continue;
> +
> +      /* This is the function segment we're looking for.  */
> +      break;
> +    }
> +
> +  return bfun;
> +}
> +
> +/* Find the innermost caller in the back trace of BFUN, skipping all
> +   function segments that do not end with a call instruction (e.g.
> +   tail calls ending with a jump).  */
> +
> +static struct btrace_function *
> +ftrace_find_call (struct gdbarch *gdbarch, struct btrace_function *bfun)
> +{
> +  for (; bfun != NULL; bfun = bfun->up)
> +    {
> +      struct btrace_insn *last;
>        CORE_ADDR pc;
>  
> -      pc = binst->pc;
> +      /* We do not allow empty function segments.  */
> +      gdb_assert (!VEC_empty (btrace_insn_s, bfun->insn));
>  
> -      /* Try to determine the function we're in.  We use both types of symbols
> -	 to avoid surprises when we sometimes get a full symbol and sometimes
> -	 only a minimal symbol.  */
> -      fun = find_pc_function (pc);
> -      mfun = lookup_minimal_symbol_by_pc (pc);
> +      last = VEC_last (btrace_insn_s, bfun->insn);
> +      pc = last->pc;
> +
> +      if (gdbarch_insn_is_call (gdbarch, pc))
> +	break;
> +    }
> +
> +  return bfun;
> +}
> +
> +/* Add a new function segment for a return.

/* Add a continuation segment for a function into which we return.

(It was ambiguous for newcomers, as it could also be read as creating a last
segment for the function from which we return.)


> +   PREV is the chronologically preceding function segment.
> +   MFUN and FUN are the symbol information we have for this function.  */
> +
> +static struct btrace_function *
> +ftrace_new_return (struct gdbarch *gdbarch,
> +		   struct btrace_function *prev,
> +		   struct minimal_symbol *mfun,
> +		   struct symbol *fun)
> +{
> +  struct btrace_function *bfun, *caller;
>  
> -      if (fun == NULL && mfun.minsym == NULL)
> +  bfun = ftrace_new_function (prev, mfun, fun);
> +
> +  /* It is important to start at PREV's caller.  Otherwise, we might find
> +     PREV itself, if PREV is a recursive function.  */
> +  caller = ftrace_find_caller (prev->up, mfun, fun);
> +  if (caller != NULL)
> +    {
> +      /* The caller of PREV is the preceding btrace function segment in this
> +	 function instance.  */
> +      gdb_assert (caller->segment.next == NULL);
> +
> +      caller->segment.next = bfun;
> +      bfun->segment.prev = caller;
> +
> +      /* Maintain the function level.  */
> +      bfun->level = caller->level;
> +
> +      /* Maintain the call stack.  */
> +      bfun->up = caller->up;
> +      bfun->flags = caller->flags;
> +
> +      ftrace_debug (bfun, "new return");
> +    }
> +  else
> +    {
> +      /* We did not find a caller.  This could mean that something went
> +	 wrong or that the call is simply not included in the trace.  */
> +
> +      /* Let's search for some actual call.  */
> +      caller = ftrace_find_call (gdbarch, prev->up);
> +      if (caller == NULL)
>  	{
> -	  DEBUG_FTRACE ("no symbol at %u, pc=%s", idx,
> -			core_addr_to_string_nz (pc));
> -	  continue;
> -	}
> +	  /* There is no call in PREV's back trace.  We assume that the
> +	     branch trace did not include it.  */
> +
> +	  /* Let's find the topmost call function - this skips tail calls.  */
> +	  while (prev->up != NULL)
> +	    prev = prev->up;
> +
> +	  /* We maintain levels for a series of returns for which we have
> +	     not seen the calls, but we restart at level 0, otherwise.  */
> +	  bfun->level = min (0, prev->level) - 1;

Why is there the 'min (0, ' part?


> +
> +	  /* Fix up the call stack for PREV.  */
> +	  ftrace_fixup_caller (prev, bfun, BFUN_UP_LINKS_TO_RET);
>  
> -      /* If we're switching functions, we start over.  */
> -      if (ftrace_function_switched (bfun, mfun.minsym, fun))
> +	  ftrace_debug (bfun, "new return - no caller");
> +	}
> +      else
>  	{
> -	  bfun = VEC_safe_push (btrace_func_s, ftrace, NULL);
> +	  /* There is a call in PREV's back trace to which we should have
> +	     returned.  Let's remain at this level.  */
> +	  bfun->level = prev->level;

Shouldn't this rather be:
	  bfun->level = caller->level;


>  
> -	  ftrace_init_func (bfun, mfun.minsym, fun, idx);
> -	  ftrace_debug (bfun, "init");
> +	  ftrace_debug (bfun, "new return - unknown caller");
>  	}
> +    }
> +
> +  return bfun;
> +}
> +
> +/* Add a new function segment for a function switch.
> +   PREV is the chronologically preceding function segment.
> +   MFUN and FUN are the symbol information we have for this function.  */
> +
> +static struct btrace_function *
> +ftrace_new_switch (struct btrace_function *prev,
> +		   struct minimal_symbol *mfun,
> +		   struct symbol *fun)
> +{
> +  struct btrace_function *bfun;
> +
> +  /* This is an unexplained function switch.  The call stack will likely
> +     be wrong at this point.  */
> +  bfun = ftrace_new_function (prev, mfun, fun);
>  
> -      /* Update the instruction range.  */
> -      bfun->iend = idx;
> -      ftrace_debug (bfun, "update insns");
> +  /* We keep the function level.  */
> +  bfun->level = prev->level;
> +
> +  ftrace_debug (bfun, "new switch");
> +
> +  return bfun;
> +}
> +
> +/* Update BFUN with respect to the instruction at PC.  This may create new
> +   function segments.
> +   Return the chronologically latest function segment, never NULL.  */
> +
> +static struct btrace_function *
> +ftrace_update_function (struct gdbarch *gdbarch,
> +			struct btrace_function *bfun, CORE_ADDR pc)
> +{
> +  struct bound_minimal_symbol bmfun;
> +  struct minimal_symbol *mfun;
> +  struct symbol *fun;
> +  struct btrace_insn *last;
> +
> +  /* Try to determine the function we're in.  We use both types of symbols
> +     to avoid surprises when we sometimes get a full symbol and sometimes
> +     only a minimal symbol.  */
> +  fun = find_pc_function (pc);
> +  bmfun = lookup_minimal_symbol_by_pc (pc);
> +  mfun = bmfun.minsym;
> +
> +  if (fun == NULL && mfun == NULL)
> +    DEBUG_FTRACE ("no symbol at %s", core_addr_to_string_nz (pc));
> +
> +  /* If we didn't have a function before, we create one.  */
> +  if (bfun == NULL)
> +    return ftrace_new_function (bfun, mfun, fun);
>  
> -      /* Let's see if we have source correlation, as well.  */
> -      sal = find_pc_line (pc, 0);
> -      if (sal.symtab == NULL || sal.line == 0)
> +  /* Check the last instruction, if we have one.
> +     We do this check first, since it allows us to fill in the call stack
> +     links in addition to the normal flow links.  */
> +  last = NULL;
> +  if (!VEC_empty (btrace_insn_s, bfun->insn))
> +    last = VEC_last (btrace_insn_s, bfun->insn);
> +
> +  if (last != NULL)
> +    {
> +      CORE_ADDR lpc;
> +
> +      lpc = last->pc;
> +
> +      /* Check for returns.  */
> +      if (gdbarch_insn_is_ret (gdbarch, lpc))
> +	return ftrace_new_return (gdbarch, bfun, mfun, fun);
> +
> +      /* Check for calls.  */
> +      if (gdbarch_insn_is_call (gdbarch, lpc))
>  	{
> -	  DEBUG_FTRACE ("no lines at %u, pc=%s", idx,
> -			core_addr_to_string_nz (pc));
> -	  continue;
> +	  int size;
> +
> +	  size = gdb_insn_length (gdbarch, lpc);
> +
> +	  /* Ignore calls to the next instruction.  They are used for PIC.  */
> +	  if (lpc + size != pc)
> +	    return ftrace_new_call (bfun, mfun, fun);
>  	}
> +    }
> +
> +  /* Check if we're switching functions for some other reason.  */
> +  if (ftrace_function_switched (bfun, mfun, fun))
> +    {
> +      DEBUG_FTRACE ("switching from %s in %s at %s",
> +		    ftrace_print_insn_addr (last),
> +		    ftrace_print_function_name (bfun),
> +		    ftrace_print_filename (bfun));
>  
> -      /* Check if we switched files.  This could happen if, say, a macro that
> -	 is defined in another file is expanded here.  */
> -      filename = symtab_to_fullname (sal.symtab);
> -      if (ftrace_skip_file (bfun, filename))
> +      if (last != NULL)
>  	{
> -	  DEBUG_FTRACE ("ignoring file at %u, pc=%s, file=%s", idx,
> -			core_addr_to_string_nz (pc), filename);
> -	  continue;
> +	  CORE_ADDR start, lpc;
> +
> +	  /* If we have symbol information for our current location, use
> +	     it to check that we jump to the start of a function.  */
> +	  if (fun != NULL || mfun != NULL)
> +	    start = get_pc_function_start (pc);
> +	  else
> +	    start = pc;

This goes into the implementation details of get_pc_function_start.  Rather,
always call get_pc_function_start, but then check in all cases whether it
failed (you do not check whether get_pc_function_start failed).
get_pc_function_start returns 0 if it has failed.

Or was the 'fun != NULL || mfun != NULL' check there for performance reasons?
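
If not, one possible shape (just a sketch; whether falling back to PC on
failure is the right behavior is your call):

  start = get_pc_function_start (pc);
  if (start == 0)
    start = pc;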


> +
> +	  lpc = last->pc;
> +
> +	  /* Jumps indicate optimized tail calls.  */
> +	  if (start == pc && gdbarch_insn_is_jump (gdbarch, lpc))
> +	    return ftrace_new_tailcall (bfun, mfun, fun);
>  	}
>  
> -      /* Update the line range.  */
> -      bfun->lbegin = min (bfun->lbegin, sal.line);
> -      bfun->lend = max (bfun->lend, sal.line);
> -      ftrace_debug (bfun, "update lines");
> +      return ftrace_new_switch (bfun, mfun, fun);
> +    }
> +
> +  return bfun;
> +}
> +
> +/* Update BFUN's source correlation with respect to the instruction at PC.  */

s/correlation/range/ ?


> +
> +static void
> +ftrace_update_lines (struct btrace_function *bfun, CORE_ADDR pc)
> +{
> +  struct symtab_and_line sal;
> +  const char *fullname;
> +
> +  sal = find_pc_line (pc, 0);
> +  if (sal.symtab == NULL || sal.line == 0)
> +    {
> +      DEBUG_FTRACE ("no lines at %s", core_addr_to_string_nz (pc));
> +      return;
> +    }
> +
> +  /* Check if we switched files.  This could happen if, say, a macro that
> +     is defined in another file is expanded here.  */
> +  fullname = symtab_to_fullname (sal.symtab);
> +  if (ftrace_skip_file (bfun, fullname))
> +    {
> +      DEBUG_FTRACE ("ignoring file at %s, file=%s",
> +		    core_addr_to_string_nz (pc), fullname);
> +      return;
> +    }
> +
> +  /* Update the line range.  */
> +  bfun->lbegin = min (bfun->lbegin, sal.line);
> +  bfun->lend = max (bfun->lend, sal.line);
> +
> +  if (record_debug > 1)
> +    ftrace_debug (bfun, "update lines");
> +}
> +
> +/* Add the instruction at PC to BFUN's instructions.  */
> +
> +static void
> +ftrace_update_insns (struct btrace_function *bfun, CORE_ADDR pc)
> +{
> +  struct btrace_insn *insn;
> +
> +  insn = VEC_safe_push (btrace_insn_s, bfun->insn, NULL);
> +  insn->pc = pc;
> +
> +  if (record_debug > 1)
> +    ftrace_debug (bfun, "update insn");
> +}
> +
> +/* Compute the function branch trace from a block branch trace BTRACE for
> +   a thread given by BTINFO.  */
> +
> +static void
> +btrace_compute_ftrace (struct btrace_thread_info *btinfo,
> +		       VEC (btrace_block_s) *btrace)

When doing any non-trivial trace on a buggy Nehalem (enabling btrace by a GDB
patch), GDB locks up on "info record".  I found it is looping in this function
with a too-large btrace range:
(gdb) p *block
$5 = {begin = 4777824, end = 9153192}

But one can break it easily with CTRL-C, and hopefully such things do not
happen on btrace-correct CPUs.


> +{
> +  struct btrace_function *begin, *end;
> +  struct gdbarch *gdbarch;
> +  unsigned int blk;
> +  int level;
> +
> +  DEBUG ("compute ftrace");
> +
> +  gdbarch = target_gdbarch ();
> +  begin = NULL;
> +  end = NULL;
> +  level = INT_MAX;
> +  blk = VEC_length (btrace_block_s, btrace);
> +
> +  while (blk != 0)
> +    {
> +      btrace_block_s *block;
> +      CORE_ADDR pc;
> +
> +      blk -= 1;
> +
> +      block = VEC_index (btrace_block_s, btrace, blk);
> +      pc = block->begin;
> +
> +      for (;;)
> +	{
> +	  int size;
> +
> +	  /* We should hit the end of the block.  Warn if we went too far.  */
> +	  if (block->end < pc)
> +	    {
> +	      warning (_("Recorded trace may be corrupted around %s."),
> +		       core_addr_to_string_nz (pc));
> +	      break;
> +	    }
> +
> +	  end = ftrace_update_function (gdbarch, end, pc);
> +	  if (begin == NULL)
> +	    begin = end;
> +
> +	  /* Maintain the function level offset.  */
> +	  level = min (level, end->level);
> +
> +	  ftrace_update_insns (end, pc);
> +	  ftrace_update_lines (end, pc);
> +
> +	  /* We're done once we pushed the instruction at the end.  */
> +	  if (block->end == pc)
> +	    break;
> +
> +	  size = gdb_insn_length (gdbarch, pc);
> +
> +	  /* Make sure we terminate if we fail to compute the size.  */
> +	  if (size <= 0)
> +	    {
> +	      warning (_("Recorded trace may be incomplete around %s."),
> +		       core_addr_to_string_nz (pc));
> +	      break;
> +	    }
> +
> +	  pc += size;
> +	}
>      }
>  
> -  return ftrace;
> +  btinfo->begin = begin;
> +  btinfo->end = end;
> +
> +  /* LEVEL is the minimal function level of all btrace function segments.
> +     Define the global level offset to -LEVEL so all function levels are
> +     normalized to start at zero.  */
> +  btinfo->level = -level;
>  }
>  
>  /* See btrace.h.  */
> @@ -394,6 +719,7 @@ btrace_fetch (struct thread_info *tp)
>  {
>    struct btrace_thread_info *btinfo;
>    VEC (btrace_block_s) *btrace;
> +  struct cleanup *cleanup;
>  
>    DEBUG ("fetch thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
>  
> @@ -402,18 +728,15 @@ btrace_fetch (struct thread_info *tp)
>      return;
>  
>    btrace = target_read_btrace (btinfo->target, btrace_read_new);
> -  if (VEC_empty (btrace_block_s, btrace))
> -    return;
> -
> -  btrace_clear (tp);
> +  cleanup = make_cleanup (VEC_cleanup (btrace_block_s), &btrace);
>  
> -  btinfo->btrace = btrace;
> -  btinfo->itrace = compute_itrace (btinfo->btrace);
> -  btinfo->ftrace = compute_ftrace (btinfo->itrace);
> +  if (!VEC_empty (btrace_block_s, btrace))
> +    {
> +      btrace_clear (tp);
> +      btrace_compute_ftrace (btinfo, btrace);
> +    }
>  
> -  /* Initialize branch trace iterators.  */
> -  btrace_init_insn_iterator (btinfo);
> -  btrace_init_func_iterator (btinfo);
> +  do_cleanups (cleanup);
>  }
>  
>  /* See btrace.h.  */
> @@ -422,18 +745,29 @@ void
>  btrace_clear (struct thread_info *tp)
>  {
>    struct btrace_thread_info *btinfo;
> +  struct btrace_function *it, *trash;
>  
>    DEBUG ("clear thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
>  
>    btinfo = &tp->btrace;
>  
> -  VEC_free (btrace_block_s, btinfo->btrace);
> -  VEC_free (btrace_inst_s, btinfo->itrace);
> -  VEC_free (btrace_func_s, btinfo->ftrace);
> +  it = btinfo->begin;
> +  while (it != NULL)
> +    {
> +      trash = it;
> +      it = it->flow.next;
> +
> +      xfree (trash);
> +    }
> +
> +  btinfo->begin = NULL;
> +  btinfo->end = NULL;
>  
> -  btinfo->btrace = NULL;
> -  btinfo->itrace = NULL;
> -  btinfo->ftrace = NULL;
> +  xfree (btinfo->insn_history);
> +  xfree (btinfo->call_history);
> +
> +  btinfo->insn_history = NULL;
> +  btinfo->call_history = NULL;
>  }
>  
>  /* See btrace.h.  */
> @@ -541,3 +875,493 @@ parse_xml_btrace (const char *buffer)
>  
>    return btrace;
>  }
> +
> +/* See btrace.h.  */
> +
> +const struct btrace_insn *
> +btrace_insn_get (const struct btrace_insn_iterator *it)
> +{
> +  const struct btrace_function *bfun;
> +  unsigned int index, end;
> +
> +  if (it == NULL)
> +    return NULL;

I do not see this style in GDB, and IMO it can delay a bug report away from
where it occurred.  Either gdb_assert (it != NULL); or just leave it crashing
below.

I do not see any existing caller that depends on passing NULL.
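
For illustration, a sketch of the gdb_assert variant (the same pattern would
apply to the other NULL checks flagged below):

  const struct btrace_insn *
  btrace_insn_get (const struct btrace_insn_iterator *it)
  {
    const struct btrace_function *bfun;
    unsigned int index, end;

    gdb_assert (it != NULL);

    index = it->index;
    bfun = it->function;
    ...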


> +
> +  index = it->index;
> +  bfun = it->function;
> +  if (bfun == NULL)
> +    return NULL;

btrace_insn_iterator::function does not state whether NULL is allowed and what
it would mean in that case.  The btrace_call_get description states "NULL if
the interator points past the end of the branch trace." but I do not see how
it could be set to NULL in any current code (I expect it was so in older code).
btrace_insn_next returns the last instruction, not the last+1 pointer.

IMO it should be stated that btrace_insn_iterator::function can never be NULL,
and here there should be either gdb_assert (bfun != NULL); or just nothing,
like above.


> +
> +  /* The index is within the bounds of this function's instruction vector.  */
> +  end = VEC_length (btrace_insn_s, bfun->insn);
> +  gdb_assert (0 < end);
> +  gdb_assert (index < end);
> +
> +  return VEC_index (btrace_insn_s, bfun->insn, index);
> +}
> +
> +/* See btrace.h.  */
> +
> +unsigned int
> +btrace_insn_number (const struct btrace_insn_iterator *it)
> +{
> +  const struct btrace_function *bfun;
> +
> +  if (it == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  bfun = it->function;
> +  if (bfun == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  return bfun->insn_offset + it->index;
> +}
> +
> +/* See btrace.h.  */
> +
> +void
> +btrace_insn_begin (struct btrace_insn_iterator *it,
> +		   const struct btrace_thread_info *btinfo)
> +{
> +  const struct btrace_function *bfun;
> +
> +  bfun = btinfo->begin;
> +  if (bfun == NULL)
> +    error (_("No trace."));
> +
> +  it->function = bfun;
> +  it->index = 0;
> +}
> +
> +/* See btrace.h.  */
> +
> +void
> +btrace_insn_end (struct btrace_insn_iterator *it,
> +		 const struct btrace_thread_info *btinfo)
> +{
> +  const struct btrace_function *bfun;
> +  unsigned int length;
> +
> +  bfun = btinfo->end;
> +  if (bfun == NULL)
> +    error (_("No trace."));
> +
> +  /* The last instruction in the last function is the current instruction.
> +     We point to it - it is one past the end of the execution trace.  */
> +  length = VEC_length (btrace_insn_s, bfun->insn);
> +
> +  it->function = bfun;
> +  it->index = length - 1;
> +}
> +
> +/* See btrace.h.  */
> +
> +unsigned int
> +btrace_insn_next (struct btrace_insn_iterator *it, unsigned int stride)
> +{
> +  const struct btrace_function *bfun;
> +  unsigned int index, steps;
> +
> +  if (it == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  bfun = it->function;
> +  if (bfun == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  steps = 0;
> +  index = it->index;
> +
> +  while (stride != 0)
> +    {
> +      unsigned int end, space, adv;
> +
> +      end = VEC_length (btrace_insn_s, bfun->insn);
> +
> +      gdb_assert (0 < end);
> +      gdb_assert (index < end);
> +
> +      /* Compute the number of instructions remaining in this segment.  */
> +      space = end - index;
> +
> +      /* Advance the iterator as far as possible within this segment.  */
> +      adv = min (space, stride);
> +      stride -= adv;
> +      index += adv;
> +      steps += adv;
> +
> +      /* Move to the next function if we're at the end of this one.  */
> +      if (index == end)
> +	{
> +	  const struct btrace_function *next;
> +
> +	  next = bfun->flow.next;
> +	  if (next == NULL)
> +	    {
> +	      /* We stepped past the last function.
> +
> +		 Let's adjust the index to point to the last instruction in
> +		 the previous function.  */
> +	      index -= 1;
> +	      steps -= 1;
> +	      break;
> +	    }
> +
> +	  /* We now point to the first instruction in the new function.  */
> +	  bfun = next;
> +	  index = 0;
> +	}
> +
> +      /* We did make progress.  */
> +      gdb_assert (adv > 0);
> +    }
> +
> +  /* Update the iterator.  */
> +  it->function = bfun;
> +  it->index = index;
> +
> +  return steps;
> +}
> +
> +/* See btrace.h.  */
> +
> +unsigned int
> +btrace_insn_prev (struct btrace_insn_iterator *it, unsigned int stride)
> +{
> +  const struct btrace_function *bfun;
> +  unsigned int index, steps;
> +
> +  if (it == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  bfun = it->function;
> +  if (bfun == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  steps = 0;
> +  index = it->index;
> +
> +  while (stride != 0)
> +    {
> +      unsigned int adv;
> +
> +      /* Move to the previous function if we're at the start of this one.  */
> +      if (index == 0)
> +	{
> +	  const struct btrace_function *prev;
> +
> +	  prev = bfun->flow.prev;
> +	  if (prev == NULL)
> +	    break;
> +
> +	  /* We point to one after the last instruction in the new function.  */
> +	  bfun = prev;
> +	  index = VEC_length (btrace_insn_s, bfun->insn);
> +
> +	  /* There is at least one instruction in this function segment.  */
> +	  gdb_assert (index > 0);
> +	}
> +
> +      /* Advance the iterator as far as possible within this segment.  */
> +      adv = min (index, stride);
> +      stride -= adv;
> +      index -= adv;
> +      steps += adv;
> +
> +      /* We did make progress.  */
> +      gdb_assert (adv > 0);
> +    }
> +
> +  /* Update the iterator.  */
> +  it->function = bfun;
> +  it->index = index;
> +
> +  return steps;
> +}
> +
> +/* See btrace.h.  */
> +
> +int
> +btrace_insn_cmp (const struct btrace_insn_iterator *lhs,
> +		 const struct btrace_insn_iterator *rhs)
> +{
> +  unsigned int lnum, rnum;
> +
> +  lnum = btrace_insn_number (lhs);
> +  rnum = btrace_insn_number (rhs);
> +
> +  return (int) (lnum - rnum);
> +}
> +
> +/* See btrace.h.  */
> +
> +int
> +btrace_find_insn_by_number (struct btrace_insn_iterator *it,
> +			    const struct btrace_thread_info *btinfo,
> +			    unsigned int number)
> +{
> +  const struct btrace_function *bfun;
> +  unsigned int end;
> +
> +  for (bfun = btinfo->end; bfun != NULL; bfun = bfun->flow.prev)
> +    if (bfun->insn_offset <= number)
> +      break;
> +
> +  if (bfun == NULL)
> +    return 0;
> +
> +  end = bfun->insn_offset + VEC_length (btrace_insn_s, bfun->insn);
> +  if (end <= number)
> +    return 0;
> +
> +  it->function = bfun;
> +  it->index = number - bfun->insn_offset;
> +
> +  return 1;
> +}
> +
> +/* See btrace.h.  */
> +
> +const struct btrace_function *
> +btrace_call_get (const struct btrace_call_iterator *it)
> +{
> +  if (it == NULL)
> +    return NULL;

Like in btrace_insn_get.


> +
> +  return it->function;

Like in btrace_insn_get, if you decide on gdb_assert (it->function != NULL);.


> +}
> +
> +/* See btrace.h.  */
> +
> +unsigned int
> +btrace_call_number (const struct btrace_call_iterator *it)
> +{
> +  const struct btrace_thread_info *btinfo;
> +  const struct btrace_function *bfun;
> +  unsigned int insns;
> +
> +  if (it == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  btinfo = it->btinfo;
> +  if (btinfo == NULL)
> +    return 0;

Similarly, btrace_call_iterator::btinfo does not state whether it can be NULL,
and consequently the code here should rather gdb_assert it (or not check it at
all).


> +
> +  bfun = it->function;
> +  if (bfun != NULL)
> +    return bfun->number;

Similarly, btrace_call_iterator::function does not state whether it can be
NULL, and consequently the code here should rather gdb_assert it (or not check
it at all).


> +
> +  /* For the end iterator, i.e. bfun == NULL, we return one more than the
> +     number of the last function.  */
> +  bfun = btinfo->end;
> +  insns = VEC_length (btrace_insn_s, bfun->insn);
> +
> +  /* If the function contains only a single instruction (i.e. the current
> +     instruction), it will be skipped and its number is already the number
> +     we seek.  */
> +  if (insns == 1)
> +    return bfun->number;
> +
> +  /* Otherwise, return one more than the number of the last function.  */
> +  return bfun->number + 1;
> +}
> +
> +/* See btrace.h.  */
> +
> +void
> +btrace_call_begin (struct btrace_call_iterator *it,
> +		   const struct btrace_thread_info *btinfo)
> +{
> +  const struct btrace_function *bfun;
> +
> +  bfun = btinfo->begin;
> +  if (bfun == NULL)
> +    error (_("No trace."));
> +
> +  it->btinfo = btinfo;
> +  it->function = bfun;
> +}
> +
> +/* See btrace.h.  */
> +
> +void
> +btrace_call_end (struct btrace_call_iterator *it,
> +		 const struct btrace_thread_info *btinfo)
> +{
> +  const struct btrace_function *bfun;
> +
> +  bfun = btinfo->end;
> +  if (bfun == NULL)
> +    error (_("No trace."));
> +
> +  it->btinfo = btinfo;
> +  it->function = NULL;
> +}
> +
> +/* See btrace.h.  */
> +
> +unsigned int
> +btrace_call_next (struct btrace_call_iterator *it, unsigned int stride)
> +{
> +  const struct btrace_function *bfun;
> +  unsigned int steps;
> +
> +  if (it == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  bfun = it->function;
> +  steps = 0;
> +  while (bfun != NULL)
> +    {
> +      const struct btrace_function *next;
> +      unsigned int insns;
> +
> +      next = bfun->flow.next;
> +      if (next == NULL)
> +	{
> +	  /* Ignore the last function if it only contains a single
> +	     (i.e. the current) instruction.  */
> +	  insns = VEC_length (btrace_insn_s, bfun->insn);
> +	  if (insns == 1)
> +	    steps -= 1;
> +	}
> +
> +      if (stride == steps)
> +	break;
> +
> +      bfun = next;
> +      steps += 1;
> +    }
> +
> +  it->function = bfun;
> +  return steps;
> +}
> +
> +/* See btrace.h.  */
> +
> +unsigned int
> +btrace_call_prev (struct btrace_call_iterator *it, unsigned int stride)
> +{
> +  const struct btrace_thread_info *btinfo;
> +  const struct btrace_function *bfun;
> +  unsigned int steps;
> +
> +  if (it == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  bfun = it->function;
> +  steps = 0;
> +
> +  if (bfun == NULL)
> +    {
> +      unsigned int insns;
> +
> +      btinfo = it->btinfo;
> +      if (btinfo == NULL)
> +	return 0;

Like in btrace_insn_get.


> +
> +      bfun = btinfo->end;
> +      if (bfun == NULL)
> +	return 0;
> +
> +      /* Ignore the last function if it only contains a single
> +	 (i.e. the current) instruction.  */
> +      insns = VEC_length (btrace_insn_s, bfun->insn);
> +      if (insns == 1)
> +	bfun = bfun->flow.prev;
> +
> +      if (bfun == NULL)
> +	return 0;
> +
> +      steps += 1;
> +    }
> +
> +  while (steps < stride)
> +    {
> +      const struct btrace_function *prev;
> +
> +      prev = bfun->flow.prev;
> +      if (prev == NULL)
> +	break;
> +
> +      bfun = prev;
> +      steps += 1;
> +    }
> +
> +  it->function = bfun;
> +  return steps;
> +}
> +
> +/* See btrace.h.  */
> +
> +int
> +btrace_call_cmp (const struct btrace_call_iterator *lhs,
> +		 const struct btrace_call_iterator *rhs)
> +{
> +  unsigned int lnum, rnum;
> +
> +  lnum = btrace_call_number (lhs);
> +  rnum = btrace_call_number (rhs);
> +
> +  return (int) (lnum - rnum);
> +}
> +
> +/* See btrace.h.  */
> +
> +int
> +btrace_find_call_by_number (struct btrace_call_iterator *it,
> +			    const struct btrace_thread_info *btinfo,
> +			    unsigned int number)
> +{
> +  const struct btrace_function *bfun;
> +
> +  if (btinfo == NULL)
> +    return 0;

Like in btrace_insn_get.


> +
> +  for (bfun = btinfo->end; bfun != NULL; bfun = bfun->flow.prev)
> +    {
> +      unsigned int bnum;
> +
> +      bnum = bfun->number;
> +      if (number == bnum)
> +	{
> +	  it->btinfo = btinfo;
> +	  it->function = bfun;
> +	  return 1;
> +	}
> +
> +      /* Functions are ordered and numbered consecutively.  We could bail out
> +	 earlier.  On the other hand, it is very unlikely that we search for
> +	 a nonexistent function.  */
> +  }
> +
> +  return 0;
> +}
> +
> +/* See btrace.h.  */
> +
> +void
> +btrace_set_insn_history (struct btrace_thread_info *btinfo,
> +			 const struct btrace_insn_iterator *begin,
> +			 const struct btrace_insn_iterator *end)
> +{
> +  if (btinfo->insn_history == NULL)
> +    btinfo->insn_history = xzalloc (sizeof (*btinfo->insn_history));
> +
> +  btinfo->insn_history->begin = *begin;
> +  btinfo->insn_history->end = *end;
> +}
> +
> +/* See btrace.h.  */
> +
> +void
> +btrace_set_call_history (struct btrace_thread_info *btinfo,
> +			 const struct btrace_call_iterator *begin,
> +			 const struct btrace_call_iterator *end)
> +{

gdb_assert (begin->btinfo == end->btinfo);


> +  if (btinfo->call_history == NULL)
> +    btinfo->call_history = xzalloc (sizeof (*btinfo->call_history));
> +
> +  btinfo->call_history->begin = *begin;
> +  btinfo->call_history->end = *end;
> +}
> diff --git a/gdb/btrace.h b/gdb/btrace.h
> index bd8425d..a3322d2 100644
> --- a/gdb/btrace.h
> +++ b/gdb/btrace.h
> @@ -29,63 +29,124 @@
>  #include "btrace-common.h"
>  
>  struct thread_info;
> +struct btrace_function;
>  
>  /* A branch trace instruction.
>  
>     This represents a single instruction in a branch trace.  */
> -struct btrace_inst
> +struct btrace_insn
>  {
>    /* The address of this instruction.  */
>    CORE_ADDR pc;
>  };
>  
> -/* A branch trace function.
> +/* A vector of branch trace instructions.  */
> +typedef struct btrace_insn btrace_insn_s;
> +DEF_VEC_O (btrace_insn_s);
> +
> +/* A doubly-linked list of branch trace function segments.  */
> +struct btrace_func_link
> +{
> +  struct btrace_function *prev;
> +  struct btrace_function *next;
> +};
> +
> +/* Flags for btrace function segments.  */
> +enum btrace_function_flag
> +{
> +  /* The 'up' link interpretation.
> +     If set, it points to the function segment we returned to.
> +     If clear, it points to the function segment we called from.  */
> +  BFUN_UP_LINKS_TO_RET = (1 << 0),
> +
> +  /* The 'up' link points to a tail call.  This obviously only makes sense
> +     if bfun_up_links_to_ret is clear.  */
> +  BFUN_UP_LINKS_TO_TAILCALL = (1 << 1)
> +};
> +
> +/* A branch trace function segment.
>  
>     This represents a function segment in a branch trace, i.e. a consecutive
> -   number of instructions belonging to the same function.  */
> -struct btrace_func
> +   number of instructions belonging to the same function.
> +
> +   We do not allow function segments without any instructions.  */
> +struct btrace_function
>  {
> -  /* The full and minimal symbol for the function.  One of them may be NULL.  */
> +  /* The full and minimal symbol for the function.  Both may be NULL.  */
>    struct minimal_symbol *msym;
>    struct symbol *sym;
>  
> +  /* The previous and next segment belonging to the same function.  */

Initially I did not understand what this field is good for; maybe this would
help:

/* Function execution making one call (of some other function) will consist
   of two segments.  */


> +  struct btrace_func_link segment;
> +
> +  /* The previous and next function in control flow order.  */
> +  struct btrace_func_link flow;
> +
> +  /* The directly preceding function segment in a (fake) call stack.  */
> +  struct btrace_function *up;
> +
> +  /* The instructions in this function segment.  */

ftrace_find_call contains /* We do not allow empty function segments.  */
so a similar comment could also go here.
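
I.e. something like:

  /* The instructions in this function segment.
     We do not allow function segments without any instructions.  */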


> +  VEC (btrace_insn_s) *insn;
> +
> +  /* The instruction number offset for the first instruction in this
> +     function segment.  */
> +  unsigned int insn_offset;
> +
> +  /* The function number in control-flow order.  */
> +  unsigned int number;
> +
> +  /* The function level in a back trace across the entire branch trace.
> +     A caller's level is one higher than the level of its callee.
> +
> +     Levels can be negative if we see returns for which we have not seen
> +     the corresponding calls.  The branch trace thread information provides
> +     a fixup to normalize function levels so the smallest level is zero.  */
> +  int level;
> +
>    /* The source line range of this function segment (both inclusive).  */
>    int lbegin, lend;
>  
> -  /* The instruction number range in the instruction trace corresponding
> -     to this function segment (both inclusive).  */
> -  unsigned int ibegin, iend;
> +  /* A bit-vector of btrace_function_flag.  */
> +  unsigned int flags;

FLAGS should be enum btrace_function_flag (it is an ORed bitmask, but GDB
displays ORed enum bitmasks appropriately).


>  };
>  
> -/* Branch trace may also be represented as a vector of:
> +/* A branch trace instruction iterator.  */
> +struct btrace_insn_iterator
> +{
> +  /* The branch trace function segment containing the instruction.  */

Please state if it can be NULL.  (IMO it cannot as discussed elsewhere.)


> +  const struct btrace_function *function;
>  
> -   - branch trace instructions starting with the oldest instruction.
> -   - branch trace functions starting with the oldest function.  */
> -typedef struct btrace_inst btrace_inst_s;
> -typedef struct btrace_func btrace_func_s;
> +  /* The index into the function segment's instruction vector.  */
> +  unsigned int index;
> +};
>  
> -/* Define functions operating on branch trace vectors.  */
> -DEF_VEC_O (btrace_inst_s);
> -DEF_VEC_O (btrace_func_s);
> +/* A branch trace function call iterator.  */
> +struct btrace_call_iterator
> +{
> +  /* The branch trace information for this thread.  */
> +  const struct btrace_thread_info *btinfo;

Please state if it can be NULL (I do not think so).


> +
> +  /* The branch trace function segment.
> +     This will be NULL for the iterator pointing to the end of the trace.  */

btrace_call_next can leave the iterator's function field NULL, while
btrace_insn_next rather stops at the very last of all instructions.  Is there
a reason for this difference?


> +  const struct btrace_function *function;
> +};
>  
>  /* Branch trace iteration state for "record instruction-history".  */
> -struct btrace_insn_iterator
> +struct btrace_insn_history
>  {
> -  /* The instruction index range from begin (inclusive) to end (exclusive)
> -     that has been covered last time.
> -     If end < begin, the branch trace has just been updated.  */
> -  unsigned int begin;
> -  unsigned int end;
> +  /* The branch trace instruction range from begin (inclusive) to
> +     end (exclusive) that has been covered last time.  */

BEGIN and END should be uppercased:
http://www.gnu.org/prep/standards/standards.html
	The comment on a function is much clearer if you use the argument
	names to speak about the argument values. The variable name itself
	should be lower case, but write it in upper case when you are speaking
	about the value rather than the variable itself. Thus, “the inode
	number NODE_NUM” rather than “an inode”. 
(It talks about parameters but still...)
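
I.e. something like:

  /* The branch trace instruction range from BEGIN (inclusive) to
     END (exclusive) that has been covered last time.  */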


> +  struct btrace_insn_iterator begin;
> +  struct btrace_insn_iterator end;
>  };
>  
>  /* Branch trace iteration state for "record function-call-history".  */
> -struct btrace_func_iterator
> +struct btrace_call_history
>  {
> -  /* The function index range from begin (inclusive) to end (exclusive)
> -     that has been covered last time.
> -     If end < begin, the branch trace has just been updated.  */
> -  unsigned int begin;
> -  unsigned int end;
> +  /* The branch trace function range from begin (inclusive) to end (exclusive)
> +     that has been covered last time.  */

BEGIN and END should be uppercased.

> +  struct btrace_call_iterator begin;
> +  struct btrace_call_iterator end;
>  };
>  
>  /* Branch trace information per thread.
> @@ -103,16 +164,23 @@ struct btrace_thread_info
>       the underlying architecture.  */
>    struct btrace_target_info *target;
>  
> -  /* The current branch trace for this thread.  */
> -  VEC (btrace_block_s) *btrace;
> -  VEC (btrace_inst_s) *itrace;
> -  VEC (btrace_func_s) *ftrace;
> +  /* The current branch trace for this thread (both inclusive).

/* Either both or neither of the two pointers is NULL.  */


> +
> +     The last instruction of END is the current instruction, which is not
> +     part of the execution history.  */
> +  struct btrace_function *begin;
> +  struct btrace_function *end;
> +
> +  /* The function level offset.  When added to each function's level,
> +     this normalizes the function levels such that the smallest level
> +     becomes zero.  */
> +  int level;
>  
>    /* The instruction history iterator.  */
> -  struct btrace_insn_iterator insn_iterator;
> +  struct btrace_insn_history *insn_history;
>  
>    /* The function call history iterator.  */
> -  struct btrace_func_iterator func_iterator;
> +  struct btrace_call_history *call_history;
>  };
>  
>  /* Enable branch tracing for a thread.  */
> @@ -139,4 +207,98 @@ extern void btrace_free_objfile (struct objfile *);
>  /* Parse a branch trace xml document into a block vector.  */
>  extern VEC (btrace_block_s) *parse_xml_btrace (const char*);
>  
> +/* Dereference a branch trace instruction iterator.  Return a pointer to the
> +   instruction the iterator points to

> or NULL if the interator does not point
> +   to a valid instruction.  */

This part may get removed depending on my comments.


> +extern const struct btrace_insn *
> + btrace_insn_get (const struct btrace_insn_iterator *);

There should be two spaces in such a case, not one.


> +
> +/* Return the instruction number for a branch trace iterator.
> +   Returns one past the maximum instruction number for the end iterator.
> +   Returns zero if the iterator does not point to a valid instruction.  */
> +extern unsigned int btrace_insn_number (const struct btrace_insn_iterator *);
> +
> +/* Initialize a branch trace instruction iterator to point to the begin/end of
> +   the branch trace.  Throws an error if there is no branch trace.  */
> +extern void btrace_insn_begin (struct btrace_insn_iterator *,
> +			       const struct btrace_thread_info *);
> +extern void btrace_insn_end (struct btrace_insn_iterator *,
> +			     const struct btrace_thread_info *);
> +
> +/* Increment/decrement a branch trace instruction iterator.  Return the number
> +   of instructions by which the instruction iterator has been advanced.
> +   Returns zero, if the operation failed.  */

STRIDE is not described.
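
I.e. the comment could say something like "Advance/retreat by at most STRIDE
instructions." (just a suggestion for the wording).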


> +extern unsigned int btrace_insn_next (struct btrace_insn_iterator *,
> +				      unsigned int stride);
> +extern unsigned int btrace_insn_prev (struct btrace_insn_iterator *,
> +				      unsigned int stride);
> +
> +/* Compare two branch trace instruction iterators.
> +   Return a negative number if LHS < RHS.
> +   Return zero if LHS == RHS.
> +   Return a positive number if LHS > RHS.  */
> +extern int btrace_insn_cmp (const struct btrace_insn_iterator *lhs,
> +			    const struct btrace_insn_iterator *rhs);
> +
> +/* Find an instruction in the function branch trace by its number.
> +   If the instruction is found, initialize the branch trace instruction
> +   iterator to point to this instruction and return non-zero.
> +   Return zero, otherwise.  */
> +extern int btrace_find_insn_by_number (struct btrace_insn_iterator *,
> +				       const struct btrace_thread_info *,
> +				       unsigned int number);
> +
> +/* Dereference a branch trace call iterator.  Return a pointer to the
> +   function the iterator points to or NULL if the interator points past
> +   the end of the branch trace.  */
> +extern const struct btrace_function *
> + btrace_call_get (const struct btrace_call_iterator *);

There should be two spaces in such a case, not one.


> +
> +/* Return the function number for a branch trace call iterator.
> +   Returns one past the maximum function number for the end iterator.
> +   Returns zero if the iterator does not point to a valid function.  */
> +extern unsigned int btrace_call_number (const struct btrace_call_iterator *);
> +
> +/* Initialize a branch trace call iterator to point to the begin/end of
> +   the branch trace.  Throws an error if there is no branch trace.  */
> +extern void btrace_call_begin (struct btrace_call_iterator *,
> +			       const struct btrace_thread_info *);
> +extern void btrace_call_end (struct btrace_call_iterator *,
> +			     const struct btrace_thread_info *);
> +
> +/* Increment/decrement a branch trace call  iterator.  Return the number

s/call  iterator/call iterator/


> +   of function segments s by which the call iterator has been advanced.
> +   Returns zero, if the operation failed.  */
> +extern unsigned int btrace_call_next (struct btrace_call_iterator *,
> +				      unsigned int stride);
> +extern unsigned int btrace_call_prev (struct btrace_call_iterator *,
> +				      unsigned int stride);
> +
> +/* Compare two branch trace call iterators.
> +   Return a negative number if LHS < RHS.
> +   Return zero if LHS == RHS.
> +   Return a positive number if LHS > RHS.  */
> +extern int btrace_call_cmp (const struct btrace_call_iterator *lhs,
> +			    const struct btrace_call_iterator *rhs);
> +
> +/* Find a function in the function branch trace by its number.
> +   If the function is found, initialize the branch trace call
> +   iterator to point to this function and return non-zero.
> +   Return zero, otherwise.  */
> +extern int btrace_find_call_by_number (struct btrace_call_iterator *,
> +				       const struct btrace_thread_info *,
> +				       unsigned int number);
> +
> +/* Set the branch trace instruction history from BEGIN (inclusive) to
> +   END (exclusive).  */
> +extern void btrace_set_insn_history (struct btrace_thread_info *,
> +				     const struct btrace_insn_iterator *begin,
> +				     const struct btrace_insn_iterator *end);
> +
> +/* Set the branch trace function call history from BEGIN (inclusive) to
> +   END (exclusive).  */
> +extern void btrace_set_call_history (struct btrace_thread_info *,
> +				     const struct btrace_call_iterator *begin,
> +				     const struct btrace_call_iterator *end);
> +
>  #endif /* BTRACE_H */
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index 68f40c8..2e7c639 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -74,7 +74,7 @@ require_btrace (void)
>  
>    btinfo = &tp->btrace;
>  
> -  if (VEC_empty (btrace_inst_s, btinfo->itrace))
> +  if (btinfo->begin == NULL)
>      error (_("No trace."));
>  
>    return btinfo;
> @@ -206,7 +206,7 @@ record_btrace_info (void)
>  {
>    struct btrace_thread_info *btinfo;
>    struct thread_info *tp;
> -  unsigned int insts, funcs;
> +  unsigned int insns, calls;
>  
>    DEBUG ("info");
>  
> @@ -216,12 +216,26 @@ record_btrace_info (void)
>  
>    btrace_fetch (tp);
>  
> +  insns = 0;
> +  calls = 0;
> +
>    btinfo = &tp->btrace;
> -  insts = VEC_length (btrace_inst_s, btinfo->itrace);
> -  funcs = VEC_length (btrace_func_s, btinfo->ftrace);
> +  if (btinfo->begin != NULL)
> +    {
> +      struct btrace_call_iterator call;
> +      struct btrace_insn_iterator insn;
> +
> +      btrace_call_end (&call, btinfo);
> +      btrace_call_prev (&call, 1);
> +      calls = btrace_call_number (&call) + 1;
> +
> +      btrace_insn_end (&insn, btinfo);
> +      btrace_insn_prev (&insn, 1);
> +      insns = btrace_insn_number (&insn) + 1;
> +    }
>  
>    printf_unfiltered (_("Recorded %u instructions in %u functions for thread "
> -		       "%d (%s).\n"), insts, funcs, tp->num,
> +		       "%d (%s).\n"), insns, calls, tp->num,
>  		     target_pid_to_str (tp->ptid));
>  }
>  
> @@ -236,27 +250,31 @@ ui_out_field_uint (struct ui_out *uiout, const char *fld, unsigned int val)
>  /* Disassemble a section of the recorded instruction trace.  */
>  
>  static void
> -btrace_insn_history (struct btrace_thread_info *btinfo, struct ui_out *uiout,
> -		     unsigned int begin, unsigned int end, int flags)
> +btrace_insn_history (struct ui_out *uiout,
> +		     const struct btrace_insn_iterator *begin,
> +		     const struct btrace_insn_iterator *end, int flags)
>  {
>    struct gdbarch *gdbarch;
> -  struct btrace_inst *inst;
> -  unsigned int idx;
> +  struct btrace_insn_iterator it;
>  
> -  DEBUG ("itrace (0x%x): [%u; %u[", flags, begin, end);
> +  DEBUG ("itrace (0x%x): [%u; %u)", flags, btrace_insn_number (begin),
> +	 btrace_insn_number (end));
>  
>    gdbarch = target_gdbarch ();
>  
> -  for (idx = begin; VEC_iterate (btrace_inst_s, btinfo->itrace, idx, inst)
> -	 && idx < end; ++idx)
> +  for (it = *begin; btrace_insn_cmp (&it, end) != 0; btrace_insn_next (&it, 1))
>      {
> +      const struct btrace_insn *insn;
> +
> +      insn = btrace_insn_get (&it);
> +
>        /* Print the instruction index.  */
> -      ui_out_field_uint (uiout, "index", idx);
> +      ui_out_field_uint (uiout, "index", btrace_insn_number (&it));
>        ui_out_text (uiout, "\t");
>  
>        /* Disassembly with '/m' flag may not produce the expected result.
>  	 See PR gdb/11833.  */
> -      gdb_disassembly (gdbarch, uiout, NULL, flags, 1, inst->pc, inst->pc + 1);
> +      gdb_disassembly (gdbarch, uiout, NULL, flags, 1, insn->pc, insn->pc + 1);
>      }
>  }
>  
> @@ -266,72 +284,62 @@ static void
>  record_btrace_insn_history (int size, int flags)
>  {
>    struct btrace_thread_info *btinfo;
> +  struct btrace_insn_history *history;
> +  struct btrace_insn_iterator begin, end;
>    struct cleanup *uiout_cleanup;
>    struct ui_out *uiout;
> -  unsigned int context, last, begin, end;
> +  unsigned int context, covered;
>  
>    uiout = current_uiout;
>    uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
>  						       "insn history");
> -  btinfo = require_btrace ();
> -  last = VEC_length (btrace_inst_s, btinfo->itrace);
> -
>    context = abs (size);
> -  begin = btinfo->insn_iterator.begin;
> -  end = btinfo->insn_iterator.end;
> -
> -  DEBUG ("insn-history (0x%x): %d, prev: [%u; %u[", flags, size, begin, end);
> -
>    if (context == 0)
>      error (_("Bad record instruction-history-size."));
>  
> -  /* We start at the end.  */
> -  if (end < begin)
> -    {
> -      /* Truncate the context, if necessary.  */
> -      context = min (context, last);
> -
> -      end = last;
> -      begin = end - context;
> -    }
> -  else if (size < 0)
> +  btinfo = require_btrace ();
> +  history = btinfo->insn_history;
> +  if (history == NULL)
>      {
> -      if (begin == 0)
> -	{
> -	  printf_unfiltered (_("At the start of the branch trace record.\n"));
> -
> -	  btinfo->insn_iterator.end = 0;
> -	  return;
> -	}
> +      /* No matter the direction, we start with the tail of the trace.  */
> +      btrace_insn_end (&begin, btinfo);
> +      end = begin;
>  
> -      /* Truncate the context, if necessary.  */
> -      context = min (context, begin);
> +      DEBUG ("insn-history (0x%x): %d", flags, size);
>  
> -      end = begin;
> -      begin -= context;
> +      covered = btrace_insn_prev (&begin, context);
>      }
>    else
>      {
> -      if (end == last)
> -	{
> -	  printf_unfiltered (_("At the end of the branch trace record.\n"));
> +      begin = history->begin;
> +      end = history->end;
>  
> -	  btinfo->insn_iterator.begin = last;
> -	  return;
> -	}
> +      DEBUG ("insn-history (0x%x): %d, prev: [%u; %u)", flags, size,
> +	     btrace_insn_number (&begin), btrace_insn_number (&end));
>  
> -      /* Truncate the context, if necessary.  */
> -      context = min (context, last - end);
> -
> -      begin = end;
> -      end += context;
> +      if (size < 0)
> +	{
> +	  end = begin;
> +	  covered = btrace_insn_prev (&begin, context);
> +	}
> +      else
> +	{
> +	  begin = end;
> +	  covered = btrace_insn_next (&end, context);
> +	}
>      }
>  
> -  btrace_insn_history (btinfo, uiout, begin, end, flags);
> -
> -  btinfo->insn_iterator.begin = begin;
> -  btinfo->insn_iterator.end = end;
> +  if (covered > 0)
> +    btrace_insn_history (uiout, &begin, &end, flags);
> +  else
> +    {
> +      if (size < 0)
> +	printf_unfiltered (_("At the start of the branch trace record.\n"));
> +      else
> +	printf_unfiltered (_("At the end of the branch trace record.\n"));
> +    }
>  
> +  btrace_set_insn_history (btinfo, &begin, &end);
>    do_cleanups (uiout_cleanup);
>  }
>  
> @@ -341,39 +349,41 @@ static void
>  record_btrace_insn_history_range (ULONGEST from, ULONGEST to, int flags)
>  {
>    struct btrace_thread_info *btinfo;
> +  struct btrace_insn_history *history;
> +  struct btrace_insn_iterator begin, end;
>    struct cleanup *uiout_cleanup;
>    struct ui_out *uiout;
> -  unsigned int last, begin, end;
> +  unsigned int low, high;
> +  int found;
>  
>    uiout = current_uiout;
>    uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
>  						       "insn history");
> -  btinfo = require_btrace ();
> -  last = VEC_length (btrace_inst_s, btinfo->itrace);
> +  low = (unsigned int) from;
> +  high = (unsigned int) to;

I do not see a reason for this cast; it is not even a signed vs. unsigned conversion.
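For example, plain assignments behave the same here; just a sketch of the
cast-free version:

  low = from;
  high = to;

  /* Check for wrap-arounds.  */
  if (low != from || high != to)
    error (_("Bad range."));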


>  
> -  begin = (unsigned int) from;
> -  end = (unsigned int) to;
> -
> -  DEBUG ("insn-history (0x%x): [%u; %u[", flags, begin, end);
> +  DEBUG ("insn-history (0x%x): [%u; %u)", flags, low, high);
>  
>    /* Check for wrap-arounds.  */
> -  if (begin != from || end != to)
> +  if (low != from || high != to)
>      error (_("Bad range."));
>  
> -  if (end <= begin)
> +  if (high <= low)
>      error (_("Bad range."));

Function description says:
    /* Disassemble a section of the recorded execution trace from instruction
       BEGIN (inclusive) to instruction END (exclusive).  */

But it behaves as if END were inclusive.  Or am I misunderstanding something?
(gdb) record instruction-history 1925,1926
1925	   0x00007ffff62f6afc <memset+28>:	ja     0x7ffff62f6b30 <memset+80>
1926	   0x00007ffff62f6afe <memset+30>:	cmp    $0x10,%rdx

If it should be inclusive, then LOW == HIGH should be allowed:
(gdb) record instruction-history 1925,1925
Bad range.

Not in this patch (it is in a later one), but there is also:
      /* We want both begin and end to be inclusive.  */
      btrace_insn_next (&end, 1);

which contradicts the description of to_insn_history_range.
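If END is meant to be inclusive, a sketch of the adjusted check (allowing
LOW == HIGH and rejecting only reversed ranges) would be:

  if (high < low)
    error (_("Bad range."));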


>  
> -  if (last <= begin)
> -    error (_("Range out of bounds."));
> +  btinfo = require_btrace ();
>  
> -  /* Truncate the range, if necessary.  */
> -  if (last < end)
> -    end = last;
> +  found = btrace_find_insn_by_number (&begin, btinfo, low);
> +  if (found == 0)
> +    error (_("Range out of bounds."));
>  
> -  btrace_insn_history (btinfo, uiout, begin, end, flags);
> +  /* Silently truncate the range, if necessary.  */
> +  found = btrace_find_insn_by_number (&end, btinfo, high);
> +  if (found == 0)
> +    btrace_insn_end (&end, btinfo);
>  
> -  btinfo->insn_iterator.begin = begin;
> -  btinfo->insn_iterator.end = end;
> +  btrace_insn_history (uiout, &begin, &end, flags);
> +  btrace_set_insn_history (btinfo, &begin, &end);
>  
>    do_cleanups (uiout_cleanup);
>  }


Unrelated to this patch but the function record_btrace_insn_history_from does
not need to be virtualized.  It does not access any internals of
record-btrace.c, it could be fully implemented in the superclass record.c and
to_insn_history_from could be deleted.

The same applies for record_btrace_call_history_from and to_call_history_from.


> @@ -412,23 +422,27 @@ record_btrace_insn_history_from (ULONGEST from, int size, int flags)
>  /* Print the instruction number range for a function call history line.  */
>  
>  static void
> -btrace_func_history_insn_range (struct ui_out *uiout, struct btrace_func *bfun)
> +btrace_call_history_insn_range (struct ui_out *uiout,
> +				const struct btrace_function *bfun)
>  {
> -  ui_out_field_uint (uiout, "insn begin", bfun->ibegin);
> +  unsigned int begin, end;
>  
> -  if (bfun->ibegin == bfun->iend)
> -    return;
> +  begin = bfun->insn_offset;
> +  end = begin + VEC_length (btrace_insn_s, bfun->insn);
>  
> +  ui_out_field_uint (uiout, "insn begin", begin);
>    ui_out_text (uiout, "-");
> -  ui_out_field_uint (uiout, "insn end", bfun->iend);
> +  ui_out_field_uint (uiout, "insn end", end);
>  }
>  
>  /* Print the source line information for a function call history line.  */
>  
>  static void
> -btrace_func_history_src_line (struct ui_out *uiout, struct btrace_func *bfun)
> +btrace_call_history_src_line (struct ui_out *uiout,
> +			      const struct btrace_function *bfun)
>  {
>    struct symbol *sym;
> +  int begin, end;
>  
>    sym = bfun->sym;
>    if (sym == NULL)
> @@ -437,54 +451,66 @@ btrace_func_history_src_line (struct ui_out *uiout, struct btrace_func *bfun)
>    ui_out_field_string (uiout, "file",
>  		       symtab_to_filename_for_display (sym->symtab));
>  
> -  if (bfun->lend == 0)
> +  begin = bfun->lbegin;
> +  end = bfun->lend;
> +
> +  if (end < begin)
>      return;
>  
>    ui_out_text (uiout, ":");
> -  ui_out_field_int (uiout, "min line", bfun->lbegin);
> +  ui_out_field_int (uiout, "min line", begin);
>  
> -  if (bfun->lend == bfun->lbegin)
> +  if (end == begin)
>      return;
>  
>    ui_out_text (uiout, "-");
> -  ui_out_field_int (uiout, "max line", bfun->lend);
> +  ui_out_field_int (uiout, "max line", end);
>  }
>  
>  /* Disassemble a section of the recorded function trace.  */
>  
>  static void
> -btrace_func_history (struct btrace_thread_info *btinfo, struct ui_out *uiout,
> -		     unsigned int begin, unsigned int end,
> +btrace_call_history (struct ui_out *uiout,
> +		     const struct btrace_call_iterator *begin,
> +		     const struct btrace_call_iterator *end,
>  		     enum record_print_flag flags)
>  {
> -  struct btrace_func *bfun;
> -  unsigned int idx;
> +  struct btrace_call_iterator it;
>  
> -  DEBUG ("ftrace (0x%x): [%u; %u[", flags, begin, end);
> +  DEBUG ("ftrace (0x%x): [%u; %u)", flags, btrace_call_number (begin),
> +	 btrace_call_number (end));
>  
> -  for (idx = begin; VEC_iterate (btrace_func_s, btinfo->ftrace, idx, bfun)
> -	 && idx < end; ++idx)
> +  for (it = *begin; btrace_call_cmp (&it, end) != 0; btrace_call_next (&it, 1))

s/!= 0/< 0/ ?  Otherwise, put this before the loop:
	gdb_assert (btrace_call_cmp (begin, end) <= 0);
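That is, just a sketch of the loop with the stricter condition:

  for (it = *begin; btrace_call_cmp (&it, end) < 0; btrace_call_next (&it, 1))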


>      {
> +      const struct btrace_function *bfun;
> +      struct minimal_symbol *msym;
> +      struct symbol *sym;
> +
> +      bfun = btrace_call_get (&it);
> +      msym = bfun->msym;
> +      sym = bfun->sym;
> +
>        /* Print the function index.  */
> -      ui_out_field_uint (uiout, "index", idx);
> +      ui_out_field_uint (uiout, "index", bfun->number);
>        ui_out_text (uiout, "\t");
>  
>        if ((flags & RECORD_PRINT_INSN_RANGE) != 0)
>  	{
> -	  btrace_func_history_insn_range (uiout, bfun);
> +	  btrace_call_history_insn_range (uiout, bfun);
>  	  ui_out_text (uiout, "\t");
>  	}
>  
>        if ((flags & RECORD_PRINT_SRC_LINE) != 0)
>  	{
> -	  btrace_func_history_src_line (uiout, bfun);
> +	  btrace_call_history_src_line (uiout, bfun);
>  	  ui_out_text (uiout, "\t");
>  	}
>  
> -      if (bfun->sym != NULL)
> -	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (bfun->sym));
> -      else if (bfun->msym != NULL)
> -	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (bfun->msym));
> +      if (sym != NULL)
> +	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (sym));
> +      else if (msym != NULL)
> +	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (msym));
> +
>        ui_out_text (uiout, "\n");
>      }
>  }
> @@ -495,72 +521,62 @@ static void
>  record_btrace_call_history (int size, int flags)
>  {
>    struct btrace_thread_info *btinfo;
> +  struct btrace_call_history *history;
> +  struct btrace_call_iterator begin, end;
>    struct cleanup *uiout_cleanup;
>    struct ui_out *uiout;
> -  unsigned int context, last, begin, end;
> +  unsigned int context, covered;
>  
>    uiout = current_uiout;
>    uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
>  						       "insn history");
> -  btinfo = require_btrace ();
> -  last = VEC_length (btrace_func_s, btinfo->ftrace);
> -
>    context = abs (size);
> -  begin = btinfo->func_iterator.begin;
> -  end = btinfo->func_iterator.end;
> -
> -  DEBUG ("func-history (0x%x): %d, prev: [%u; %u[", flags, size, begin, end);
> -
>    if (context == 0)
>      error (_("Bad record function-call-history-size."));
>  
> -  /* We start at the end.  */
> -  if (end < begin)
> -    {
> -      /* Truncate the context, if necessary.  */
> -      context = min (context, last);
> -
> -      end = last;
> -      begin = end - context;
> -    }
> -  else if (size < 0)
> +  btinfo = require_btrace ();
> +  history = btinfo->call_history;
> +  if (history == NULL)
>      {
> -      if (begin == 0)
> -	{
> -	  printf_unfiltered (_("At the start of the branch trace record.\n"));
> -
> -	  btinfo->func_iterator.end = 0;
> -	  return;
> -	}
> +      /* No matter the direction, we start with the tail of the trace.  */
> +      btrace_call_end (&begin, btinfo);
> +      end = begin;
>  
> -      /* Truncate the context, if necessary.  */
> -      context = min (context, begin);
> +      DEBUG ("call-history (0x%x): %d", flags, size);
>  
> -      end = begin;
> -      begin -= context;
> +      covered = btrace_call_prev (&begin, context);
>      }
>    else
>      {
> -      if (end == last)
> -	{
> -	  printf_unfiltered (_("At the end of the branch trace record.\n"));
> +      begin = history->begin;
> +      end = history->end;
>  
> -	  btinfo->func_iterator.begin = last;
> -	  return;
> -	}
> +      DEBUG ("call-history (0x%x): %d, prev: [%u; %u)", flags, size,
> +	     btrace_call_number (&begin), btrace_call_number (&end));
>  
> -      /* Truncate the context, if necessary.  */
> -      context = min (context, last - end);
> -
> -      begin = end;
> -      end += context;
> +      if (size < 0)
> +	{
> +	  end = begin;
> +	  covered = btrace_call_prev (&begin, context);
> +	}
> +      else
> +	{
> +	  begin = end;
> +	  covered = btrace_call_next (&end, context);
> +	}
>      }
>  
> -  btrace_func_history (btinfo, uiout, begin, end, flags);
> -
> -  btinfo->func_iterator.begin = begin;
> -  btinfo->func_iterator.end = end;
> +  if (covered > 0)
> +    btrace_call_history (uiout, &begin, &end, flags);
> +  else
> +    {
> +      if (size < 0)
> +	printf_unfiltered (_("At the start of the branch trace record.\n"));
> +      else
> +	printf_unfiltered (_("At the end of the branch trace record.\n"));
> +    }
>  
> +  btrace_set_call_history (btinfo, &begin, &end);
>    do_cleanups (uiout_cleanup);
>  }
>  
> @@ -570,39 +586,41 @@ static void
>  record_btrace_call_history_range (ULONGEST from, ULONGEST to, int flags)
>  {
>    struct btrace_thread_info *btinfo;
> +  struct btrace_call_history *history;
> +  struct btrace_call_iterator begin, end;
>    struct cleanup *uiout_cleanup;
>    struct ui_out *uiout;
> -  unsigned int last, begin, end;
> +  unsigned int low, high;
> +  int found;
>  
>    uiout = current_uiout;
>    uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
>  						       "func history");
> -  btinfo = require_btrace ();
> -  last = VEC_length (btrace_func_s, btinfo->ftrace);
> +  low = (unsigned int) from;
> +  high = (unsigned int) to;

I do not see a reason for this cast; it is not even a signed vs. unsigned conversion.


>  
> -  begin = (unsigned int) from;
> -  end = (unsigned int) to;
> -
> -  DEBUG ("func-history (0x%x): [%u; %u[", flags, begin, end);
> +  DEBUG ("call-history (0x%x): [%u; %u[", flags, low, high);

Elsewhere you recently switched to: [%u; %u)


>  
>    /* Check for wrap-arounds.  */
> -  if (begin != from || end != to)
> +  if (low != from || high != to)
>      error (_("Bad range."));
>  
> -  if (end <= begin)
> +  if (high <= low)
>      error (_("Bad range."));

The same inclusive/exclusive question as in record_btrace_insn_history_range and
to_insn_history_range applies here.

      /* We want both begin and end to be inclusive.  */
      btrace_call_next (&end, 1);

(gdb) record function-call-history 700,701
700	_dl_lookup_symbol_x
701	_dl_fixup
(gdb) record function-call-history 700,700
Bad range.


>  
> -  if (last <= begin)
> -    error (_("Range out of bounds."));
> +  btinfo = require_btrace ();
>  
> -  /* Truncate the range, if necessary.  */
> -  if (last < end)
> -    end = last;
> +  found = btrace_find_call_by_number (&begin, btinfo, low);
> +  if (found == 0)
> +    error (_("Range out of bounds."));
>  
> -  btrace_func_history (btinfo, uiout, begin, end, flags);
> +  /* Silently truncate the range, if necessary.  */
> +  found = btrace_find_call_by_number (&end, btinfo, high);
> +  if (found == 0)
> +    btrace_call_end (&end, btinfo);
>  
> -  btinfo->func_iterator.begin = begin;
> -  btinfo->func_iterator.end = end;
> +  btrace_call_history (uiout, &begin, &end, flags);
> +  btrace_set_call_history (btinfo, &begin, &end);
>  
>    do_cleanups (uiout_cleanup);
>  }
> diff --git a/gdb/testsuite/gdb.btrace/function_call_history.exp b/gdb/testsuite/gdb.btrace/function_call_history.exp
> index 97447e1..7658637 100644
> --- a/gdb/testsuite/gdb.btrace/function_call_history.exp
> +++ b/gdb/testsuite/gdb.btrace/function_call_history.exp
> @@ -204,16 +204,18 @@ set bp_location [gdb_get_line_number "bp.2" $testfile.c]
>  gdb_breakpoint $bp_location
>  gdb_continue_to_breakpoint "cont to $bp_location" ".*$testfile.c:$bp_location.*"
>  
> -# at this point we expect to have main, fib, ..., fib, main, where fib occurs 8 times,
> -# so we limit the output to only show the latest 10 function calls
> -gdb_test_no_output "set record function-call-history-size 10"
> -set message "show recursive function call history"
> -gdb_test_multiple "record function-call-history" $message {
> -    -re "13\tmain\r\n14\tfib\r\n15\tfib\r\n16\tfib\r\n17\tfib\r\n18\tfib\r\n19\tfib\r\n20\tfib\r\n21\tfib\r\n22	 main\r\n$gdb_prompt $" {
> -        pass $message
> -    }
> -    -re "13\tinc\r\n14\tmain\r\n15\tinc\r\n16\tmain\r\n17\tinc\r\n18\tmain\r\n19\tinc\r\n20\tmain\r\n21\tfib\r\n22\tmain\r\n$gdb_prompt $" {
> -        # recursive function calls appear only as 1 call
> -        kfail "gdb/15240" $message
> -    }
> -}
> +# at this point we expect to have main, fib, ..., fib, main, where fib occurs 9 times,
> +# so we limit the output to only show the latest 11 function calls
> +gdb_test_no_output "set record function-call-history-size 11"
> +gdb_test "record function-call-history" "
> +20\tmain\r
> +21\tfib\r
> +22\tfib\r
> +23\tfib\r
> +24\tfib\r
> +25\tfib\r
> +26\tfib\r
> +27\tfib\r
> +28\tfib\r
> +29\tfib\r
> +30\tmain" "show recursive function call history"
> diff --git a/gdb/testsuite/gdb.btrace/instruction_history.exp b/gdb/testsuite/gdb.btrace/instruction_history.exp
> index c1a61b7..bd25404 100644
> --- a/gdb/testsuite/gdb.btrace/instruction_history.exp
> +++ b/gdb/testsuite/gdb.btrace/instruction_history.exp
> @@ -56,9 +56,9 @@ gdb_test_multiple "info record" $testname {
>      }
>  }
>  
> -# we have exactly 7 instructions here
> -set message "exactly 7 instructions"
> -if { $traced != 7 } {
> +# we have exactly 6 instructions here
> +set message "exactly 6 instructions"
> +if { $traced != 6 } {
>      fail $message
>  } else {
>      pass $message
> @@ -144,6 +144,8 @@ if { $lines != $history_size } {
>      pass $message
>  }
>  
> +set history_size 2
> +gdb_test_no_output "set record instruction-history-size $history_size"
>  set message "browse history forward middle"
>  set lines [test_lines_length "record instruction-history +" $message]
>  if { $lines != $history_size } {
> @@ -165,6 +167,8 @@ gdb_test "record instruction-history" "At the end of the branch trace record\\."
>  # make sure we cannot move further
>  gdb_test "record instruction-history" "At the end of the branch trace record\\." "browse history forward beyond 2"
>  
> +set history_size 3
> +gdb_test_no_output "set record instruction-history-size $history_size"
>  set message "browse history backward last"
>  set lines [test_lines_length "record instruction-history -" $message]
>  if { $lines != $history_size } {
> @@ -173,6 +177,8 @@ if { $lines != $history_size } {
>      pass $message
>  }
>  
> +set history_size 2
> +gdb_test_no_output "set record instruction-history-size $history_size"
>  set message "browse history backward middle"
>  set lines [test_lines_length "record instruction-history -" $message]
>  if { $lines != $history_size } {
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 04/24] record-btrace: fix insn range in function call history
  2013-07-03  9:15 ` [patch v4 04/24] record-btrace: fix insn range in function call history Markus Metzger
@ 2013-08-18 19:06   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:06 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:14 +0200, Markus Metzger wrote:
> With the "/i" modifier, we print the instruction number range in the
> "record function-call-history" command as [begin, end).
> 
> It would be more intuitive if we printed the range as [begin, end].
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* record-btrace.c (btrace_call_history_insn_range): Print
> 	insn range as [begin, end].
> 
> 
> ---
>  gdb/record-btrace.c |    8 ++++++--
>  1 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index 2e7c639..d9a2ba7 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -425,10 +425,14 @@ static void
>  btrace_call_history_insn_range (struct ui_out *uiout,
>  				const struct btrace_function *bfun)
>  {
> -  unsigned int begin, end;
> +  unsigned int begin, end, size;
> +
> +  size = VEC_length (btrace_insn_s, bfun->insn);
> +  if (size == 0)
> +    return;

The current code seems to assert that INSN cannot be empty.
So 'if (size == 0)' should either be replaced by a gdb_assert or removed altogether.
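E.g., a sketch of the gdb_assert variant:

  size = VEC_length (btrace_insn_s, bfun->insn);
  gdb_assert (size > 0);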


>  
>    begin = bfun->insn_offset;
> -  end = begin + VEC_length (btrace_insn_s, bfun->insn);
> +  end = begin + size - 1;
>  
>    ui_out_field_uint (uiout, "insn begin", begin);
>    ui_out_text (uiout, "-");
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 06/24] btrace: increase buffer size
  2013-07-03  9:15 ` [patch v4 06/24] btrace: increase buffer size Markus Metzger
@ 2013-08-18 19:06   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:06 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:16 +0200, Markus Metzger wrote:
> Try to allocate as much buffer as we can for each thread with a maximum
> of 4MB.
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* common/linux-btrace.c (linux_enable_btrace): Increase buffer.

Say "Increase buffer size." or "Enlarge buffer." instead.


> 
> 
> ---
>  gdb/common/linux-btrace.c |   25 +++++++++++++++----------
>  1 files changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/gdb/common/linux-btrace.c b/gdb/common/linux-btrace.c
> index b874c84..4880f41 100644
> --- a/gdb/common/linux-btrace.c
> +++ b/gdb/common/linux-btrace.c
> @@ -420,7 +420,7 @@ struct btrace_target_info *
>  linux_enable_btrace (ptid_t ptid)
>  {
>    struct btrace_target_info *tinfo;
> -  int pid;
> +  int pid, pg;
>  
>    tinfo = xzalloc (sizeof (*tinfo));
>    tinfo->ptid = ptid;
> @@ -448,17 +448,22 @@ linux_enable_btrace (ptid_t ptid)
>    if (tinfo->file < 0)
>      goto err;
>  
> -  /* We hard-code the trace buffer size.
> -     At some later time, we should make this configurable.  */
> -  tinfo->size = 1;
> -  tinfo->buffer = mmap (NULL, perf_event_mmap_size (tinfo),
> -			PROT_READ, MAP_SHARED, tinfo->file, 0);
> -  if (tinfo->buffer == MAP_FAILED)
> -    goto err_file;
> +  /* We try to allocate as much buffer as we can get.
> +     We could allow the user to specify the size of the buffer, but then
> +     we'd leave this search for the maximum buffer size to him.  */
> +  for (pg = 10; pg >= 0; --pg)
> +    {
> +      /* The number of pages we request needs to be a power of two.  */
> +      tinfo->size = 1 << pg;
> +      tinfo->buffer = mmap (NULL, perf_event_mmap_size (tinfo),
> +			    PROT_READ, MAP_SHARED, tinfo->file, 0);
> +      if (tinfo->buffer == MAP_FAILED)
> +	continue;
>  
> -  return tinfo;
> +      return tinfo;
> +    }
>  
> - err_file:
> +  /* We were not able to allocate any buffer.  */
>    close (tinfo->file);
>  
>   err:
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 07/24] record-btrace: optionally indent function call history
  2013-07-03  9:14 ` [patch v4 07/24] record-btrace: optionally indent function call history Markus Metzger
@ 2013-08-18 19:06   ` Jan Kratochvil
  2013-09-10 13:06     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:06 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches, Christian Himpel

On Wed, 03 Jul 2013 11:14:17 +0200, Markus Metzger wrote:
> Add a new modifier /c to the "record function-call-history" command to
> indent the function name based on its depth in the call stack.
> 
> Also reorder the optional fields to have the indentation at the very beginning.
> Prefix the insn range (/i modifier) with "inst ".
> Prefix the source line (/l modifier) with "at ".
> Change the range syntax from "begin-end" to "begin,end" to allow copy&paste to
> the "record instruction-history" and "list" commands.
> 
> Adjust the respective tests and add new tests for the /c modifier.
> 
> There is one known bug regarding indentation that results from the fact that we
> have the current instruction already inside the branch trace.  When the current
> instruction is the first (and only) instruction in a function on the outermost
> level for which we have not seen the call, the indentation starts at level 1
> with 2 leading spaces.
> 
> Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
> CC: Christian Himpel  <christian.himpel@intel.com>
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
>     * record.h (enum record_print_flag)
>     <record_print_indent_calls>: New.
>     * record.c (get_call_history_modifiers): Recognize /c modifier.
>     (_initialize_record): Document /c modifier.
>     * record-btrace.c (btrace_call_history): Add btinfo parameter.
>     Reorder fields.  Optionally indent the function name.  Update
>     all users.
>     * NEWS: Announce changes.
> 
> testsuite/
>     * gdb.btrace/function_call_history.exp: Fix expected field
>     order for "record function-call-history".
>     Add new tests for "record function-call-history /c".
>     * gdb.btrace/exception.cc: New.
>     * gdb.btrace/exception.exp: New.
>     * gdb.btrace/tailcall.exp: New.
>     * gdb.btrace/x86-tailcall.S: New.
>     * gdb.btrace/x86-tailcall.c: New.
>     * gdb.btrace/unknown_functions.c: New.
>     * gdb.btrace/unknown_functions.exp: New.
>     * gdb.btrace/Makefile.in (EXECUTABLES): Add new.
> 
> doc/
>     * gdb.texinfo (Process Record and Replay): Document new /c
>     modifier accepted by "record function-call-history".
> 
> 
> ---
>  gdb/NEWS                                           |    6 +
>  gdb/doc/gdb.texinfo                                |   12 +-
>  gdb/record-btrace.c                                |   33 ++-
>  gdb/record.c                                       |    4 +
>  gdb/record.h                                       |    3 +
>  gdb/testsuite/gdb.btrace/Makefile.in               |    3 +-
>  gdb/testsuite/gdb.btrace/exception.cc              |   56 ++++
>  gdb/testsuite/gdb.btrace/exception.exp             |   65 +++++
>  gdb/testsuite/gdb.btrace/function_call_history.exp |  112 +++++++--
>  gdb/testsuite/gdb.btrace/tailcall.exp              |   49 ++++
>  gdb/testsuite/gdb.btrace/unknown_functions.c       |   45 ++++
>  gdb/testsuite/gdb.btrace/unknown_functions.exp     |   58 +++++
>  gdb/testsuite/gdb.btrace/x86-tailcall.S            |  269 ++++++++++++++++++++
>  gdb/testsuite/gdb.btrace/x86-tailcall.c            |   39 +++
>  14 files changed, 716 insertions(+), 38 deletions(-)
>  create mode 100644 gdb/testsuite/gdb.btrace/exception.cc
>  create mode 100755 gdb/testsuite/gdb.btrace/exception.exp
>  create mode 100644 gdb/testsuite/gdb.btrace/tailcall.exp
>  create mode 100644 gdb/testsuite/gdb.btrace/unknown_functions.c
>  create mode 100644 gdb/testsuite/gdb.btrace/unknown_functions.exp
>  create mode 100644 gdb/testsuite/gdb.btrace/x86-tailcall.S
>  create mode 100644 gdb/testsuite/gdb.btrace/x86-tailcall.c
> 
> diff --git a/gdb/NEWS b/gdb/NEWS
> index e469f1e..6ac910a 100644
> --- a/gdb/NEWS
> +++ b/gdb/NEWS
> @@ -13,6 +13,12 @@ Nios II ELF 			nios2*-*-elf
>  Nios II GNU/Linux		nios2*-*-linux
>  Texas Instruments MSP430	msp430*-*-elf
>  
> +* The command 'record function-call-history' supports a new modifier '/c' to
> +  indent the function names based on their call stack depth.
> +  The fields for the '/i' and '/l' modifier have been reordered.
> +  The instruction range is now prefixed with 'insn'.
> +  The source line range is now prefixed with 'at'.
> +
>  * New commands:
>  catch rethrow
>    Like "catch throw", but catches a re-thrown exception.
> diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
> index fae54e4..2cfc20b 100644
> --- a/gdb/doc/gdb.texinfo
> +++ b/gdb/doc/gdb.texinfo
> @@ -6419,7 +6419,9 @@ line for each sequence of instructions that belong to the same
>  function giving the name of that function, the source lines
>  for this instruction sequence (if the @code{/l} modifier is
>  specified), and the instructions numbers that form the sequence (if
> -the @code{/i} modifier is specified).
> +the @code{/i} modifier is specified).  The function names are indented
> +to reflect the call stack depth if the @code{/c} modifier is
> +specified.
>  
>  @smallexample
>  (@value{GDBP}) @b{list 1, 10}
> @@ -6433,10 +6435,10 @@ the @code{/i} modifier is specified).
>  8     foo ();
>  9     ...
>  10  @}
> -(@value{GDBP}) @b{record function-call-history /l}
> -1  foo.c:6-8   bar
> -2  foo.c:2-3   foo
> -3  foo.c:9-10  bar
> +(@value{GDBP}) @b{record function-call-history /lc}
> +1  bar     at foo.c:6,8
> +2    foo   at foo.c:2,3
> +3  bar     at foo.c:9,10
>  @end smallexample
>  
>  By default, ten lines are printed.  This can be changed using the
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index df69a41..99dc046 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -435,7 +435,7 @@ btrace_call_history_insn_range (struct ui_out *uiout,
>    end = begin + size - 1;
>  
>    ui_out_field_uint (uiout, "insn begin", begin);
> -  ui_out_text (uiout, "-");
> +  ui_out_text (uiout, ",");
>    ui_out_field_uint (uiout, "insn end", end);
>  }
>  
> @@ -467,7 +467,7 @@ btrace_call_history_src_line (struct ui_out *uiout,
>    if (end == begin)
>      return;
>  
> -  ui_out_text (uiout, "-");
> +  ui_out_text (uiout, ",");
>    ui_out_field_int (uiout, "max line", end);
>  }
>  
> @@ -475,6 +475,7 @@ btrace_call_history_src_line (struct ui_out *uiout,
>  
>  static void
>  btrace_call_history (struct ui_out *uiout,
> +		     const struct btrace_thread_info *btinfo,
>  		     const struct btrace_call_iterator *begin,
>  		     const struct btrace_call_iterator *end,
>  		     enum record_print_flag flags)
> @@ -498,23 +499,33 @@ btrace_call_history (struct ui_out *uiout,
>        ui_out_field_uint (uiout, "index", bfun->number);
>        ui_out_text (uiout, "\t");
>  
> +      if ((flags & RECORD_PRINT_INDENT_CALLS) != 0)
> +	{
> +	  int level = bfun->level + btinfo->level, i;
> +
> +	  for (i = 0; i < level; ++i)
> +	    ui_out_text (uiout, "  ");
> +	}
> +
> +      if (sym != NULL)
> +	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (sym));
> +      else if (msym != NULL)
> +	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (msym));
> +      else
> +	ui_out_field_string (uiout, "function", "<unknown>");

This should be _("<unknown>").  (BTW, I do not know of any existing
localized message catalogs for GDB.)

_() would be inappropriate for MI, but in that case there should IMO rather
be:

  else if (!ui_out_is_mi_like_p (uiout))
    ui_out_field_string (uiout, "function", _("<unknown>"));

But there is currently no MI interface set up for these commands (although you
have nicely prepared the commands for MI), so I do not find it worth the time
to discuss MI issues now.


> +
>        if ((flags & RECORD_PRINT_INSN_RANGE) != 0)
>  	{
> +	  ui_out_text (uiout, "\tinst ");
>  	  btrace_call_history_insn_range (uiout, bfun);
> -	  ui_out_text (uiout, "\t");
>  	}
>  
>        if ((flags & RECORD_PRINT_SRC_LINE) != 0)
>  	{
> +	  ui_out_text (uiout, "\tat ");
>  	  btrace_call_history_src_line (uiout, bfun);
> -	  ui_out_text (uiout, "\t");
>  	}
>  
> -      if (sym != NULL)
> -	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (sym));
> -      else if (msym != NULL)
> -	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (msym));
> -
>        ui_out_text (uiout, "\n");
>      }
>  }
> @@ -571,7 +582,7 @@ record_btrace_call_history (int size, int flags)
>      }
>  
>    if (covered > 0)
> -    btrace_call_history (uiout, &begin, &end, flags);
> +    btrace_call_history (uiout, btinfo, &begin, &end, flags);
>    else
>      {
>        if (size < 0)
> @@ -623,7 +634,7 @@ record_btrace_call_history_range (ULONGEST from, ULONGEST to, int flags)
>    if (found == 0)
>      btrace_call_end (&end, btinfo);
>  
> -  btrace_call_history (uiout, &begin, &end, flags);
> +  btrace_call_history (uiout, btinfo, &begin, &end, flags);
>    btrace_set_call_history (btinfo, &begin, &end);
>  
>    do_cleanups (uiout_cleanup);
> diff --git a/gdb/record.c b/gdb/record.c
> index 07b1b97..ffe9810 100644
> --- a/gdb/record.c
> +++ b/gdb/record.c
> @@ -575,6 +575,9 @@ get_call_history_modifiers (char **arg)
>  	    case 'i':
>  	      modifiers |= RECORD_PRINT_INSN_RANGE;
>  	      break;
> +	    case 'c':
> +	      modifiers |= RECORD_PRINT_INDENT_CALLS;
> +	      break;
>  	    default:
>  	      error (_("Invalid modifier: %c."), *args);
>  	    }
> @@ -809,6 +812,7 @@ function.\n\
>  Without modifiers, it prints the function name.\n\
>  With a /l modifier, the source file and line number range is included.\n\
>  With a /i modifier, the instruction number range is included.\n\
> +With a /c modifier, the output is indented based on the call stack depth.\n\
>  With no argument, prints ten more lines after the previous ten-line print.\n\
>  \"record function-call-history -\" prints ten lines before a previous ten-line \
>  print.\n\
> diff --git a/gdb/record.h b/gdb/record.h
> index 65d508f..9acc7de 100644
> --- a/gdb/record.h
> +++ b/gdb/record.h
> @@ -40,6 +40,9 @@ enum record_print_flag
>  
>    /* Print the instruction number range (if applicable).  */
>    RECORD_PRINT_INSN_RANGE = (1 << 1),
> +
> +  /* Indent based on call stack depth (if applicable).  */
> +  RECORD_PRINT_INDENT_CALLS = (1 << 2)
>  };
>  
>  /* Wrapper for target_read_memory that prints a debug message if
> diff --git a/gdb/testsuite/gdb.btrace/Makefile.in b/gdb/testsuite/gdb.btrace/Makefile.in
> index f4c06d1..5c70700 100644
> --- a/gdb/testsuite/gdb.btrace/Makefile.in
> +++ b/gdb/testsuite/gdb.btrace/Makefile.in
> @@ -1,7 +1,8 @@
>  VPATH = @srcdir@
>  srcdir = @srcdir@
>  
> -EXECUTABLES   = enable function_call_history instruction_history
> +EXECUTABLES   = enable function_call_history instruction_history tailcall \
> +  exception
>  
>  MISCELLANEOUS =
>  
> diff --git a/gdb/testsuite/gdb.btrace/exception.cc b/gdb/testsuite/gdb.btrace/exception.cc
> new file mode 100644
> index 0000000..029a4bc
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/exception.cc
> @@ -0,0 +1,56 @@
> +/* This testcase is part of GDB, the GNU debugger.
> +
> +   Copyright 2013 Free Software Foundation, Inc.
> +
> +   Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +static void
> +bad (void)
> +{
> +  throw 42;
> +}
> +
> +static void
> +bar (void)
> +{
> +  bad ();
> +}
> +
> +static void
> +foo (void)
> +{
> +  bar ();
> +}
> +
> +static void
> +test (void)
> +{
> +  try
> +    {
> +      foo ();
> +    }
> +  catch (...)
> +    {
> +    }
> +}
> +
> +int
> +main (void)
> +{
> +  test ();
> +  test (); /* bp.1  */
> +  return 0; /* bp.2  */
> +}
> diff --git a/gdb/testsuite/gdb.btrace/exception.exp b/gdb/testsuite/gdb.btrace/exception.exp
> new file mode 100755
> index 0000000..77a07fd
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/exception.exp
> @@ -0,0 +1,65 @@
> +# This testcase is part of GDB, the GNU debugger.
> +#
> +# Copyright 2013 Free Software Foundation, Inc.
> +#
> +# Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# check for btrace support
> +if { [skip_btrace_tests] } { return -1 }
> +
> +# start inferior
> +standard_testfile exception.cc
> +if [prepare_for_testing $testfile.exp $testfile $srcfile {c++ debug}] {
> +    return -1
> +}
> +if ![runto_main] {
> +    return -1
> +}
> +
> +# we want to see the full trace for this test
> +gdb_test_no_output "set record function-call-history-size 0"
> +
> +# set bp
> +set bp_1 [gdb_get_line_number "bp.1" $srcfile]
> +set bp_2 [gdb_get_line_number "bp.2" $srcfile]
> +gdb_breakpoint $bp_1
> +gdb_breakpoint $bp_2
> +
> +# trace the code between thw two breakpoints
> +gdb_continue_to_breakpoint "cont to $bp_1" ".*$srcfile:$bp_1.*"

gdb_continue_to_breakpoint "cont to bp_1" ".*$srcfile:$bp_1\r\n.*"
 * Using line numbers in test messages is not great: the line numbers
   occasionally change, causing needless gdb.sum differences, and the message
   does not say much anyway.
 * The original "$bp_1.*" can produce a false positive match (line 5 will
   incorrectly match line 50, etc.).


> +gdb_test_no_output "record btrace"
> +gdb_continue_to_breakpoint "cont to $bp_2" ".*$srcfile:$bp_2.*"

Likewise.


> +
> +# show the flat branch trace
> +send_gdb "record function-call-history 1\n"
> +gdb_expect_list "exception - flat" "\r\n$gdb_prompt $" {"\r
> +1\ttest\\(\\)\r
> +2\tfoo\\(\\)\r
> +3\tbar\\(\\)\r
> +4\tbad\\(\\)\r" "\r
> +\[0-9\]*\ttest\\(\\)"}
> +
> +# show the branch trace with calls indented
> +#
> +# here we see a known bug that the indentation starts at level 1 with
> +# two leading spaces instead of level 0 without leading spaces.
> +send_gdb "record function-call-history /c 1\n"
> +gdb_expect_list "exception - calls indented" "\r\n$gdb_prompt $" {"\r
> +1\t  test\\(\\)\r
> +2\t    foo\\(\\)\r
> +3\t      bar\\(\\)\r
> +4\t        bad\\(\\)\r" "\r
> +\[0-9\]*\t  test\\(\\)"}
> diff --git a/gdb/testsuite/gdb.btrace/function_call_history.exp b/gdb/testsuite/gdb.btrace/function_call_history.exp
> index d694d5c..754cbbe 100644
> --- a/gdb/testsuite/gdb.btrace/function_call_history.exp
> +++ b/gdb/testsuite/gdb.btrace/function_call_history.exp
> @@ -62,6 +62,30 @@ gdb_test "record function-call-history" "
>  20\tinc\r
>  21\tmain\r" "record function-call-history - with size unlimited"
>  
> +# show indented function call history with unlimited size
> +gdb_test "record function-call-history /c 1" "
> +1\tmain\r
> +2\t  inc\r
> +3\tmain\r
> +4\t  inc\r
> +5\tmain\r
> +6\t  inc\r
> +7\tmain\r
> +8\t  inc\r
> +9\tmain\r
> +10\t  inc\r
> +11\tmain\r
> +12\t  inc\r
> +13\tmain\r
> +14\t  inc\r
> +15\tmain\r
> +16\t  inc\r
> +17\tmain\r
> +18\t  inc\r
> +19\tmain\r
> +20\t  inc\r
> +21\tmain\r" "indented record function-call-history - with size unlimited"
> +
>  # show function call history with size of 21, we expect to see all 21 entries
>  gdb_test_no_output "set record function-call-history-size 21"
>  # show function call history
> @@ -155,32 +179,35 @@ gdb_test "record function-call-history -" "At the start of the branch trace reco
>  # make sure we cannot move any further back
>  gdb_test "record function-call-history -" "At the start of the branch trace record\\." "record function-call-history - at the start (2)"
>  
> +# don't mess around with path names
> +gdb_test_no_output "set filename-display basename"
> +
>  # moving forward again, but this time with file and line number, expected to see the first 15 entries
>  gdb_test "record function-call-history /l +" "
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r" "record function-call-history /l - show first 15 entries"
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r" "record function-call-history /l - show first 15 entries"
>  
>  # moving forward and expect to see the latest 6 entries
>  gdb_test "record function-call-history /l +" "
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-41\tmain\r
> -.*$srcfile:22-24\tinc\r
> -.*$srcfile:40-43\tmain\r" "record function-call-history /l - show last 6 entries"
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,41\r
> +\[0-9\]*\tinc\tat $srcfile:22,24\r
> +\[0-9\]*\tmain\tat $srcfile:40,43\r" "record function-call-history /l - show last 6 entries"
>  
>  # moving further forward shouldn't work
>  gdb_test "record function-call-history /l +" "At the end of the branch trace record\\." "record function-call-history /l - at the end (1)"
> @@ -219,3 +246,46 @@ gdb_test "record function-call-history" "
>  29\tfib\r
>  30\tfib\r
>  31\tmain" "show recursive function call history"
> +
> +# show indented function call history for fib
> +gdb_test "record function-call-history /c 21, +11" "
> +21\tmain\r
> +22\t  fib\r
> +23\t    fib\r
> +24\t  fib\r
> +25\t    fib\r
> +26\t      fib\r
> +27\t    fib\r
> +28\t      fib\r
> +29\t    fib\r
> +30\t  fib\r
> +31\tmain" "indented record function-call-history - fib"
> +
> +# make sure we can handle incomplete trace with respect to indentation
> +if ![runto_main] {
> +    return -1
> +}
> +# navigate to the fib in line 24 above
> +gdb_breakpoint fib
> +gdb_continue_to_breakpoint "cont to fib.1"
> +gdb_continue_to_breakpoint "cont to fib.2"
> +gdb_continue_to_breakpoint "cont to fib.3"
> +gdb_continue_to_breakpoint "cont to fib.4"
> +
> +# start tracing
> +gdb_test_no_output "record btrace"
> +
> +# continue until line 30 above
> +delete_breakpoints
> +set bp_location [gdb_get_line_number "bp.2" $testfile.c]
> +gdb_breakpoint $bp_location
> +gdb_continue_to_breakpoint "cont to $bp_location" ".*$testfile.c:$bp_location.*"

Like above.


> +
> +# let's look at the trace. we expect to see the tail of the above listing.
> +gdb_test "record function-call-history /c" "
> +1\t      fib\r
> +2\t    fib\r
> +3\t      fib\r
> +4\t    fib\r
> +5\t  fib\r
> +6\tmain" "indented record function-call-history - fib"
> diff --git a/gdb/testsuite/gdb.btrace/tailcall.exp b/gdb/testsuite/gdb.btrace/tailcall.exp
> new file mode 100644
> index 0000000..cf9fdf3
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/tailcall.exp
> @@ -0,0 +1,49 @@
> +# This testcase is part of GDB, the GNU debugger.
> +#
> +# Copyright 2013 Free Software Foundation, Inc.
> +#
> +# Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# check for btrace support
> +if { [skip_btrace_tests] } { return -1 }
> +
> +# start inferior
> +standard_testfile x86-tailcall.S
> +if [prepare_for_testing tailcall.exp $testfile $srcfile {c++ debug}] {
> +    return -1
> +}

This does not work, for example, on an i386 host or on an x86_64 host with:
	$ runtest CXX_FOR_TARGET="g++ -m32" gdb.btrace/tailcall.exp 
	Running ./gdb.btrace/tailcall.exp ...
	gdb compile failed, x86-tailcall.c: Assembler messages:
	x86-tailcall.c:100: Error: cannot represent relocation type BFD_RELOC_64
	...

You can look, for example, at gdb.arch/amd64-tailcall-noret.exp for:
 * how to skip the testcase if the target is not 64-bit x86_64.
 * (optional) how to provide a COMPILE=1 convenience parameter that rebuilds
   the testcase from the .c file with the local compiler for debugging (nobody
   guarantees your compiler will produce a compatible .S for the testcase, but
   I still find it handy at times).


> +if ![runto_main] {
> +    return -1
> +}
> +
> +# we want to see the full trace for this test
> +gdb_test_no_output "set record function-call-history-size 0"
> +
> +# trace the call to foo
> +gdb_test_no_output "record btrace"
> +gdb_test "next"
> +
> +# show the flat branch trace
> +gdb_test "record function-call-history 1" "
> +1\tfoo\r
> +2\tbar\r
> +3\tmain" "tailcall - flat"
> +
> +# show the branch trace with calls indented
> +gdb_test "record function-call-history /c 1" "
> +1\t  foo\r
> +2\t    bar\r
> +3\tmain" "tailcall - calls indented"
> diff --git a/gdb/testsuite/gdb.btrace/unknown_functions.c b/gdb/testsuite/gdb.btrace/unknown_functions.c
> new file mode 100644
> index 0000000..178c3e9
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/unknown_functions.c
> @@ -0,0 +1,45 @@
> +/* This testcase is part of GDB, the GNU debugger.
> +
> +   Copyright 2013 Free Software Foundation, Inc.
> +
> +   Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +static int foo (void);
> +
> +int test (void)
> +{
> +  return foo ();
> +}
> +
> +static int
> +bar (void)
> +{
> +  return 42;
> +}
> +
> +static int
> +foo (void)
> +{
> +  return bar ();
> +}
> +
> +int
> +main (void)
> +{
> +  test ();
> +  test ();
> +  return 0;
> +}
> diff --git a/gdb/testsuite/gdb.btrace/unknown_functions.exp b/gdb/testsuite/gdb.btrace/unknown_functions.exp
> new file mode 100644
> index 0000000..c7f33bf
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/unknown_functions.exp
> @@ -0,0 +1,58 @@
> +# This testcase is part of GDB, the GNU debugger.
> +#
> +# Copyright 2013 Free Software Foundation, Inc.
> +#
> +# Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# check for btrace support
> +if { [skip_btrace_tests] } { return -1 }
> +
> +# start inferior
> +standard_testfile
> +
> +# discard local symbols
> +set ldflags "additional_flags=-Wl,-x"
> +if [prepare_for_testing $testfile.exp $testfile $srcfile $ldflags] {
> +    return -1
> +}
> +if ![runto test] {
> +    return -1
> +}
> +
> +# we want to see the full trace for this test
> +gdb_test_no_output "set record function-call-history-size 0"
> +
> +# trace from one call of test to the next
> +gdb_test_no_output "record btrace"
> +gdb_continue_to_breakpoint "cont to test" ".*test.*"
> +
> +# show the flat branch trace
> +gdb_test "record function-call-history 1" "
> +1\t<unknown>\r
> +2\t<unknown>\r
> +3\t<unknown>\r
> +4\ttest\r
> +5\tmain\r
> +6\ttest" "unknown - flat"
> +
> +# show the branch trace with calls indented
> +gdb_test "record function-call-history /c 1" "
> +1\t    <unknown>\r
> +2\t      <unknown>\r
> +3\t    <unknown>\r
> +4\t  test\r
> +5\tmain\r
> +6\t  test" "unknown - calls indented"
> diff --git a/gdb/testsuite/gdb.btrace/x86-tailcall.S b/gdb/testsuite/gdb.btrace/x86-tailcall.S
> new file mode 100644
> index 0000000..5a4fede
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/x86-tailcall.S
> @@ -0,0 +1,269 @@
> +/* This testcase is part of GDB, the GNU debugger.
> +
> +   Copyright 2013 Free Software Foundation, Inc.
> +
> +   Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +
> +   This file has been generated using:
> +   gcc -S -O2 -g x86-tailcall.c -o x86-tailcall.S  */

It is better to also use -dA if one needs to deal with the .S source.


> +
> +	.file	"x86-tailcall.c"
> +	.section	.debug_abbrev,"",@progbits
> +.Ldebug_abbrev0:
[...]
> +	.section	.debug_str,"MS",@progbits,1
> +.LASF1:
> +	.string	"gdb/testsuite/gdb.btrace/x86-tailcall.c"
> +.LASF4:
> +	.string	"answer"
> +.LASF0:
> +	.string	"GNU C 4.4.4 20100726 (Red Hat 4.4.4-13)"
> +.LASF3:
> +	.string	"main"
> +.LASF2:
> +	.string	"/users/mmetzger/gdb/gerrit/git"

You could, for example, replace the directory by the following (IIRC that is
not correct for out-of-src-tree builds anyway, but it at least finds the source
for in-src-tree builds):
	.string	""


> +	.ident	"GCC: (GNU) 4.4.4 20100726 (Red Hat 4.4.4-13)"
> +	.section	.note.GNU-stack,"",@progbits
> diff --git a/gdb/testsuite/gdb.btrace/x86-tailcall.c b/gdb/testsuite/gdb.btrace/x86-tailcall.c
> new file mode 100644
> index 0000000..9e3b183
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/x86-tailcall.c
> @@ -0,0 +1,39 @@
> +/* This testcase is part of GDB, the GNU debugger.
> +
> +   Copyright 2013 Free Software Foundation, Inc.
> +
> +   Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +static __attribute__ ((noinline)) int
> +bar (void)
> +{
> +  return 42;
> +}
> +
> +static __attribute__ ((noinline)) int
> +foo (void)
> +{
> +  return bar ();
> +}
> +
> +int
> +main (void)
> +{
> +  int answer;
> +
> +  answer = foo ();
> +  return ++answer;
> +}
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 11/24] record-btrace: supply register target methods
  2013-07-03  9:14 ` [patch v4 11/24] record-btrace: supply register target methods Markus Metzger
@ 2013-08-18 19:07   ` Jan Kratochvil
  2013-09-16  9:19     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:07 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:21 +0200, Markus Metzger wrote:
> Supply target methods to allow reading the PC.  Forbid anything else.
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* record-btrace.c (record_btrace_fetch_registers,
> 	record_btrace_store_registers,
> 	record_btrace_to_prepare_to_store): New.
> 	(init_record_btrace_ops): Add the above.
> 
> 
> ---
>  gdb/record-btrace.c |   95 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 95 insertions(+), 0 deletions(-)
> 
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index 5e41b20..e9c0801 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -32,6 +32,7 @@
>  #include "ui-out.h"
>  #include "symtab.h"
>  #include "filenames.h"
> +#include "regcache.h"
>  
>  /* The target_ops of record-btrace.  */
>  static struct target_ops record_btrace_ops;
> @@ -752,6 +753,97 @@ record_btrace_is_replaying (void)
>    return 0;
>  }
>  
> +/* The to_fetch_registers method of target record-btrace.  */
> +
> +static void
> +record_btrace_fetch_registers (struct target_ops *ops,
> +			       struct regcache *regcache, int regno)
> +{
> +  struct btrace_insn_iterator *replay;
> +  struct thread_info *tp;
> +
> +  tp = find_thread_ptid (inferior_ptid);
> +  if (tp == NULL)
> +    return;

Are you aware of when this can happen?  If not, then:
  gdb_assert (tp != NULL);


> +
> +  replay = tp->btrace.replay;
> +  if (replay != NULL)
> +    {
> +      const struct btrace_insn *insn;
> +      struct gdbarch *gdbarch;
> +      int pcreg;
> +
> +      gdbarch = get_regcache_arch (regcache);
> +      pcreg = gdbarch_pc_regnum (gdbarch);
> +      if (pcreg < 0)
> +	return;
> +
> +      /* We can only provide the PC register.  */
> +      if (regno >= 0 && regno != pcreg)
> +	return;
> +
> +      insn = btrace_insn_get (replay);
> +      if (insn == NULL)
> +	return;

Shouldn't this rather be an error?

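For example, a minimal sketch (the message text is only an illustration, not a
concrete proposal):

      insn = btrace_insn_get (replay);
      if (insn == NULL)
	error (_("Replay position is not available."));
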
> +
> +      regcache_raw_supply (regcache, regno, &insn->pc);
> +    }
> +  else
> +    {
> +      struct target_ops *t;
> +
> +      for (t = ops->beneath; t != NULL; t = t->beneath)
> +	if (t->to_fetch_registers != NULL)
> +	  {
> +	    t->to_fetch_registers (t, regcache, regno);
> +	    break;
> +	  }
> +    }
> +}
> +
> +/* The to_store_registers method of target record-btrace.  */
> +
> +static void
> +record_btrace_store_registers (struct target_ops *ops,
> +			       struct regcache *regcache, int regno)
> +{
> +  struct target_ops *t;
> +
> +  if (record_btrace_is_replaying ())
> +    return;

Currently I get:
	(gdb) p $rax
	$1 = <unavailable>
	(gdb) p $rax=1
	$2 = <unavailable>

I would find an error() more appropriate here, so that we get:
	(gdb) p $rax
	$1 = <unavailable>
	(gdb) p $rax=1
	Some error message.

With gdbserver trace one gets:
	(gdb) print globalc
	$1 = <unavailable>
	(gdb) print globalc=1
	Cannot access memory at address 0x602120
which is not so convenient.  As I checked, it comes from the gdbserver E01
response: gdb_write_memory -> if (current_traceframe >= 0) return EIO;
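
Concretely, something along these lines in record_btrace_store_registers
(just a sketch; the exact message is only an example):

  if (record_btrace_is_replaying ())
    error (_("Cannot write registers while replaying."));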


> +
> +  if (may_write_registers == 0)
> +    error (_("Writing to registers is not allowed (regno %d)"), regno);

Here should rather be:
  gdb_assert (may_write_registers != 0);

as target_store_registers() would not forward the call here otherwise.
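
For reference, if I read gdb/target.c right, target_store_registers() already
guards the call with roughly:

  if (may_write_registers == 0)
    error (_("Writing to registers is not allowed (regno %d)"), regno);

so by the time the record-btrace method is reached, may_write_registers is
known to be non-zero.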


> +
> +  for (t = ops->beneath; t != NULL; t = t->beneath)
> +    if (t->to_store_registers != NULL)
> +      {
> +	t->to_store_registers (t, regcache, regno);
> +	return;
> +      }
> +
> +  noprocess ();
> +}
> +
> +/* The to_prepare_to_store method of target record-btrace.  */
> +
> +static void
> +record_btrace_prepare_to_store (struct target_ops *ops,
> +				struct regcache *regcache)
> +{
> +  struct target_ops *t;
> +
> +  if (record_btrace_is_replaying ())
> +    return;
> +
> +  for (t = ops->beneath; t != NULL; t = t->beneath)
> +    if (t->to_prepare_to_store != NULL)
> +      {
> +	t->to_prepare_to_store (t, regcache);
> +	return;
> +      }
> +}
> +
>  /* Initialize the record-btrace target ops.  */
>  
>  static void
> @@ -779,6 +871,9 @@ init_record_btrace_ops (void)
>    ops->to_call_history_from = record_btrace_call_history_from;
>    ops->to_call_history_range = record_btrace_call_history_range;
>    ops->to_record_is_replaying = record_btrace_is_replaying;
> +  ops->to_fetch_registers = record_btrace_fetch_registers;
> +  ops->to_store_registers = record_btrace_store_registers;
> +  ops->to_prepare_to_store = record_btrace_prepare_to_store;
>    ops->to_stratum = record_stratum;
>    ops->to_magic = OPS_MAGIC;
>  }
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 13/24] record-btrace, frame: supply target-specific unwinder
  2013-07-03  9:15 ` [patch v4 13/24] record-btrace, frame: supply target-specific unwinder Markus Metzger
@ 2013-08-18 19:07   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:07 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:23 +0200, Markus Metzger wrote:
> Supply a target-specific frame unwinder for the record-btrace target that does
> not allow unwinding while replaying.
> 
> 2013-02-11  Jan Kratochvil  <jan.kratochvil@redhat.com>
>             Markus Metzger  <markus.t.metzger@intel.com>
> 
> gdb/
> 	* record-btrace.c: Include frame-unwind.h.
> 	(record_btrace_frame_unwind_stop_reason,
> 	record_btrace_frame_this_id, record_btrace_frame_prev_register,
> 	record_btrace_frame_sniffer, record_btrace_frame_unwind):
> 	New.
> 	(init_record_btrace_ops): Install it.
> 
> 
> ---
>  gdb/record-btrace.c |   66 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 66 insertions(+), 0 deletions(-)
> 
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index e9c0801..cb1f3bb 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -33,6 +33,7 @@
>  #include "symtab.h"
>  #include "filenames.h"
>  #include "regcache.h"
> +#include "frame-unwind.h"
>  
>  /* The target_ops of record-btrace.  */
>  static struct target_ops record_btrace_ops;
> @@ -844,6 +845,70 @@ record_btrace_prepare_to_store (struct target_ops *ops,
>        }
>  }
>  
> +/* Implement stop_reason method for record_btrace_frame_unwind.  */
> +
> +static enum unwind_stop_reason
> +record_btrace_frame_unwind_stop_reason (struct frame_info *this_frame,
> +					void **this_cache)
> +{
> +  return UNWIND_UNAVAILABLE;
> +}
> +
> +/* Implement this_id method for record_btrace_frame_unwind.  */
> +
> +static void
> +record_btrace_frame_this_id (struct frame_info *this_frame, void **this_cache,
> +			     struct frame_id *this_id)
> +{
> +  /* Leave there the outer_frame_id value.  */
> +}
> +
> +/* Implement prev_register method for record_btrace_frame_unwind.  */
> +
> +static struct value *
> +record_btrace_frame_prev_register (struct frame_info *this_frame,
> +				   void **this_cache,
> +				   int regnum)
> +{
> +  throw_error (NOT_AVAILABLE_ERROR,
> +              _("Registers are not available in btrace record history"));
> +}
> +
> +/* Implement sniffer method for record_btrace_frame_unwind.  */
> +
> +static int
> +record_btrace_frame_sniffer (const struct frame_unwind *self,
> +			     struct frame_info *this_frame,
> +			     void **this_cache)
> +{
> +  struct thread_info *tp;
> +  struct btrace_thread_info *btinfo;
> +  struct btrace_insn_iterator *replay;
> +
> +  /* This doesn't seem right.  Yet, I don't see how I could get from a frame
> +     to its thread.  */

That's OK.  Either remove the comment or:
  /* THIS_FRAME does not contain a reference to its thread.  */


> +  tp = find_thread_ptid (inferior_ptid);
> +  if (tp == NULL)
> +    return 0;

  gdb_assert (tp != NULL);


> +
> +  return btrace_is_replaying (tp);
> +}
> +
> +/* btrace recording does not store previous memory content, neither the stack
> +   frames content.  Any unwinding would return errorneous results as the stack
> +   contents no longer matches the changed PC value restored from history.
> +   Therefore this unwinder reports any possibly unwound registers as
> +   <unavailable>.  */
> +
> +static const struct frame_unwind record_btrace_frame_unwind =
> +{
> +  NORMAL_FRAME,
> +  record_btrace_frame_unwind_stop_reason,
> +  record_btrace_frame_this_id,
> +  record_btrace_frame_prev_register,
> +  NULL,
> +  record_btrace_frame_sniffer
> +};
>  /* Initialize the record-btrace target ops.  */
>  
>  static void
> @@ -874,6 +939,7 @@ init_record_btrace_ops (void)
>    ops->to_fetch_registers = record_btrace_fetch_registers;
>    ops->to_store_registers = record_btrace_store_registers;
>    ops->to_prepare_to_store = record_btrace_prepare_to_store;
> +  ops->to_get_unwinder = &record_btrace_frame_unwind;
>    ops->to_stratum = record_stratum;
>    ops->to_magic = OPS_MAGIC;
>  }
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 09/24] btrace: add replay position to btrace thread info
  2013-07-03  9:14 ` [patch v4 09/24] btrace: add replay position to btrace thread info Markus Metzger
@ 2013-08-18 19:07   ` Jan Kratochvil
  2013-09-10 13:24     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:07 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:19 +0200, Markus Metzger wrote:
> Add a branch trace instruction iterator pointing to the current replay position
> to the branch trace thread info struct.
> 
> Free the iterator when btrace is cleared.
> 
> Start at the replay position for the instruction and function-call histories.
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
>     * btrace.h (replay) <replay>: New.
>     (btrace_is_replaying): New.
>     * btrace.c (btrace_clear): Free replay iterator.
>     (btrace_is_replaying): New.
>     * record-btrace.c (record_btrace_is_replaying): New.
>     (record_btrace_info): Print insn number if replaying.
>     (record_btrace_insn_history): Start at replay position.
>     (record_btrace_call_history): Start at replay position.
>     (init_record_btrace_ops): Init to_record_is_replaying.
> 
> 
> ---
>  gdb/btrace.c        |   10 ++++++
>  gdb/btrace.h        |    6 ++++
>  gdb/record-btrace.c |   80 +++++++++++++++++++++++++++++++++++++++++++++-----
>  3 files changed, 88 insertions(+), 8 deletions(-)
> 
> diff --git a/gdb/btrace.c b/gdb/btrace.c
> index 006deaa..0bec2cf 100644
> --- a/gdb/btrace.c
> +++ b/gdb/btrace.c
> @@ -771,9 +771,11 @@ btrace_clear (struct thread_info *tp)
>  
>    xfree (btinfo->insn_history);
>    xfree (btinfo->call_history);
> +  xfree (btinfo->replay);
>  
>    btinfo->insn_history = NULL;
>    btinfo->call_history = NULL;
> +  btinfo->replay = NULL;
>  }
>  
>  /* See btrace.h.  */
> @@ -1371,3 +1373,11 @@ btrace_set_call_history (struct btrace_thread_info *btinfo,
>    btinfo->call_history->begin = *begin;
>    btinfo->call_history->end = *end;
>  }
> +
> +/* See btrace.h.  */
> +
> +int
> +btrace_is_replaying (struct thread_info *tp)
> +{
> +  return tp->btrace.replay != NULL;
> +}
> diff --git a/gdb/btrace.h b/gdb/btrace.h
> index a3322d2..5a5b297 100644
> --- a/gdb/btrace.h
> +++ b/gdb/btrace.h
> @@ -181,6 +181,9 @@ struct btrace_thread_info
>  
>    /* The function call history iterator.  */
>    struct btrace_call_history *call_history;
> +
> +  /* The current replay position.  NULL if not replaying.  */
> +  struct btrace_insn_iterator *replay;
>  };
>  
>  /* Enable branch tracing for a thread.  */
> @@ -301,4 +304,7 @@ extern void btrace_set_call_history (struct btrace_thread_info *,
>  				     const struct btrace_call_iterator *begin,
>  				     const struct btrace_call_iterator *end);
>  
> +/* Determine if branch tracing is currently replaying TP.  */
> +extern int btrace_is_replaying (struct thread_info *tp);
> +
>  #endif /* BTRACE_H */
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index c7d6e9f..5e41b20 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -237,6 +237,10 @@ record_btrace_info (void)
>    printf_unfiltered (_("Recorded %u instructions in %u functions for thread "
>  		       "%d (%s).\n"), insns, calls, tp->num,
>  		     target_pid_to_str (tp->ptid));
> +
> +  if (btrace_is_replaying (tp))
> +    printf_unfiltered (_("Replay in progress.  At instruction %u.\n"),
> +		       btrace_insn_number (btinfo->replay));
>  }
>  
>  /* Print an unsigned int.  */
> @@ -301,13 +305,34 @@ record_btrace_insn_history (int size, int flags)
>    history = btinfo->insn_history;
>    if (history == NULL)
>      {
> -      /* No matter the direction, we start with the tail of the trace.  */
> -      btrace_insn_end (&begin, btinfo);
> -      end = begin;
> +      struct btrace_insn_iterator *replay;
>  
>        DEBUG ("insn-history (0x%x): %d", flags, size);
>  
> -      covered = btrace_insn_prev (&begin, context);
> +      /* If we're replaying, we start at the replay position.  Otherwise, we
> +	 start at the tail of the trace.  */
> +      replay = btinfo->replay;
> +      if (replay != NULL)
> +	begin = *replay;
> +      else
> +	btrace_insn_end (&begin, btinfo);
> +
> +      /* We start from here and expand in the requested direction.  Then we
> +	 expand in the other direction, as well, to fill up any remaining
> +	 context.  */
> +      end = begin;
> +      if (size < 0)
> +	{
> +	  /* We want the current position covered, as well.  */
> +	  covered = btrace_insn_next (&end, 1);
> +	  covered += btrace_insn_prev (&begin, context - covered);
> +	  covered += btrace_insn_next (&end, context - covered);
> +	}
> +      else
> +	{
> +	  covered = btrace_insn_next (&end, context);
> +	  covered += btrace_insn_prev (&begin, context - covered);
> +	}

These two COVERED calculations do not seem right to me: the pointer is moving
both NEXT and PREV, so the two directions should be both added and subtracted.


>      }
>    else
>      {
> @@ -562,13 +587,37 @@ record_btrace_call_history (int size, int flags)
>    history = btinfo->call_history;
>    if (history == NULL)
>      {
> -      /* No matter the direction, we start with the tail of the trace.  */
> -      btrace_call_end (&begin, btinfo);
> -      end = begin;
> +      struct btrace_insn_iterator *replay;
>  
>        DEBUG ("call-history (0x%x): %d", flags, size);
>  
> -      covered = btrace_call_prev (&begin, context);
> +      /* If we're replaying, we start at the replay position.  Otherwise, we
> +	 start at the tail of the trace.  */
> +      replay = btinfo->replay;
> +      if (replay != NULL)
> +	{
> +	  begin.function = replay->function;
> +	  begin.btinfo = btinfo;
> +	}
> +      else
> +	btrace_call_end (&begin, btinfo);
> +
> +      /* We start from here and expand in the requested direction.  Then we
> +	 expand in the other direction, as well, to fill up any remaining
> +	 context.  */
> +      end = begin;
> +      if (size < 0)
> +	{
> +	  /* We want the current position covered, as well.  */
> +	  covered = btrace_call_next (&end, 1);
> +	  covered += btrace_call_prev (&begin, context - covered);
> +	  covered += btrace_call_next (&end, context - covered);
> +	}
> +      else
> +	{
> +	  covered = btrace_call_next (&end, context);
> +	  covered += btrace_call_prev (&begin, context- covered);
> +	}

These two COVERED calculations do not seem right to me: the pointer is moving
both NEXT and PREV, so the two directions should be both added and subtracted.


>      }
>    else
>      {
> @@ -689,6 +738,20 @@ record_btrace_call_history_from (ULONGEST from, int size, int flags)
>    record_btrace_call_history_range (begin, end, flags);
>  }
>  
> +/* The to_record_is_replaying method of target record-btrace.  */
> +
> +static int
> +record_btrace_is_replaying (void)
> +{
> +  struct thread_info *tp;
> +
> +  ALL_THREADS (tp)
> +    if (btrace_is_replaying (tp))
> +      return 1;
> +
> +  return 0;
> +}
> +
>  /* Initialize the record-btrace target ops.  */
>  
>  static void
> @@ -715,6 +778,7 @@ init_record_btrace_ops (void)
>    ops->to_call_history = record_btrace_call_history;
>    ops->to_call_history_from = record_btrace_call_history_from;
>    ops->to_call_history_range = record_btrace_call_history_range;
> +  ops->to_record_is_replaying = record_btrace_is_replaying;
>    ops->to_stratum = record_stratum;
>    ops->to_magic = OPS_MAGIC;
>  }
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 10/24] target: add ops parameter to to_prepare_to_store method
  2013-07-03  9:14 ` [patch v4 10/24] target: add ops parameter to to_prepare_to_store method Markus Metzger
@ 2013-08-18 19:07   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:07 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:20 +0200, Markus Metzger wrote:
> To allow forwarding the prepare_to_store request to the target beneath,
> add a target_ops * parameter.
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* target.h (target_ops) <to_prepare_to_store>: Add parameter.
> 	(target_prepare_to_store): Remove macro.  New function.
> 	* target.c (update_current_target): Do not inherit/default
> 	prepare_to_store.
> 	(target_prepare_to_store): New.
> 	(debug_to_prepare_to_store): Remove.
> 	* remote.c (remote_prepare_to_store): Add parameter.
> 	* remote-mips.c (mips_prepare_to_store): Add parameter.
> 	* remote-m32r-sdi.c (m32r_prepare_to_store): Add parameter.
> 	* ravenscar-thread.c (ravenscar_prepare_to_store): Add
> 	parameter.
> 	* monitor.c (monitor_prepare_to_store): Add parameter.
> 	* inf-child.c (inf_child_prepare_to_store): Add parameter.
> 
> 
> ---
>  gdb/inf-child.c        |    2 +-
>  gdb/monitor.c          |    2 +-
>  gdb/ravenscar-thread.c |    7 ++++---
>  gdb/record-full.c      |    3 ++-
>  gdb/remote-m32r-sdi.c  |    2 +-
>  gdb/remote-mips.c      |    5 +++--
>  gdb/remote.c           |    5 +++--
>  gdb/target.c           |   36 +++++++++++++++++++++---------------
>  gdb/target.h           |    5 ++---
>  9 files changed, 38 insertions(+), 29 deletions(-)
> 
> diff --git a/gdb/inf-child.c b/gdb/inf-child.c
> index f5992bb..3be4315 100644
> --- a/gdb/inf-child.c
> +++ b/gdb/inf-child.c
> @@ -100,7 +100,7 @@ inf_child_post_attach (int pid)
>     program being debugged.  */
>  
>  static void
> -inf_child_prepare_to_store (struct regcache *regcache)
> +inf_child_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
>  {
>  }
>  
> diff --git a/gdb/monitor.c b/gdb/monitor.c
> index beca4e4..8b1059c 100644
> --- a/gdb/monitor.c
> +++ b/gdb/monitor.c
> @@ -1427,7 +1427,7 @@ monitor_store_registers (struct target_ops *ops,
>     debugged.  */
>  
>  static void
> -monitor_prepare_to_store (struct regcache *regcache)
> +monitor_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
>  {
>    /* Do nothing, since we can store individual regs.  */
>  }
> diff --git a/gdb/ravenscar-thread.c b/gdb/ravenscar-thread.c
> index 0a3100d..adcd3a2 100644
> --- a/gdb/ravenscar-thread.c
> +++ b/gdb/ravenscar-thread.c
> @@ -62,7 +62,8 @@ static void ravenscar_fetch_registers (struct target_ops *ops,
>                                         struct regcache *regcache, int regnum);
>  static void ravenscar_store_registers (struct target_ops *ops,
>                                         struct regcache *regcache, int regnum);
> -static void ravenscar_prepare_to_store (struct regcache *regcache);
> +static void ravenscar_prepare_to_store (struct target_ops *ops,
> +					struct regcache *regcache);
>  static void ravenscar_resume (struct target_ops *ops, ptid_t ptid, int step,
>  			      enum gdb_signal siggnal);
>  static void ravenscar_mourn_inferior (struct target_ops *ops);
> @@ -303,14 +304,14 @@ ravenscar_store_registers (struct target_ops *ops,
>  }
>  
>  static void
> -ravenscar_prepare_to_store (struct regcache *regcache)
> +ravenscar_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
>  {
>    struct target_ops *beneath = find_target_beneath (&ravenscar_ops);
>  
>    if (!ravenscar_runtime_initialized ()
>        || ptid_equal (inferior_ptid, base_magic_null_ptid)
>        || ptid_equal (inferior_ptid, ravenscar_running_thread ()))
> -    beneath->to_prepare_to_store (regcache);
> +    beneath->to_prepare_to_store (beneath, regcache);
>    else
>      {
>        struct gdbarch *gdbarch = get_regcache_arch (regcache);
> diff --git a/gdb/record-full.c b/gdb/record-full.c
> index 3a8d326..058da8a 100644
> --- a/gdb/record-full.c
> +++ b/gdb/record-full.c
> @@ -2148,7 +2148,8 @@ record_full_core_fetch_registers (struct target_ops *ops,
>  /* "to_prepare_to_store" method for prec over corefile.  */
>  
>  static void
> -record_full_core_prepare_to_store (struct regcache *regcache)
> +record_full_core_prepare_to_store (struct target_ops *ops,
> +				   struct regcache *regcache)
>  {
>  }
>  
> diff --git a/gdb/remote-m32r-sdi.c b/gdb/remote-m32r-sdi.c
> index 2f910e6..1955ec1 100644
> --- a/gdb/remote-m32r-sdi.c
> +++ b/gdb/remote-m32r-sdi.c
> @@ -1013,7 +1013,7 @@ m32r_store_register (struct target_ops *ops,
>     debugged.  */
>  
>  static void
> -m32r_prepare_to_store (struct regcache *regcache)
> +m32r_prepare_to_store (struct target_ops *target, struct regcache *regcache)
>  {
>    /* Do nothing, since we can store individual regs.  */
>    if (remote_debug)
> diff --git a/gdb/remote-mips.c b/gdb/remote-mips.c
> index 1619622..5aa57f1 100644
> --- a/gdb/remote-mips.c
> +++ b/gdb/remote-mips.c
> @@ -92,7 +92,8 @@ static int mips_map_regno (struct gdbarch *, int);
>  
>  static void mips_set_register (int regno, ULONGEST value);
>  
> -static void mips_prepare_to_store (struct regcache *regcache);
> +static void mips_prepare_to_store (struct target_ops *ops,
> +				   struct regcache *regcache);
>  
>  static int mips_fetch_word (CORE_ADDR addr, unsigned int *valp);
>  
> @@ -2069,7 +2070,7 @@ mips_fetch_registers (struct target_ops *ops,
>     registers, so this function doesn't have to do anything.  */
>  
>  static void
> -mips_prepare_to_store (struct regcache *regcache)
> +mips_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
>  {
>  }
>  
> diff --git a/gdb/remote.c b/gdb/remote.c
> index 1d6ac90..b352ca6 100644
> --- a/gdb/remote.c
> +++ b/gdb/remote.c
> @@ -101,7 +101,8 @@ static void async_remote_interrupt_twice (gdb_client_data);
>  
>  static void remote_files_info (struct target_ops *ignore);
>  
> -static void remote_prepare_to_store (struct regcache *regcache);
> +static void remote_prepare_to_store (struct target_ops *ops,
> +				     struct regcache *regcache);
>  
>  static void remote_open (char *name, int from_tty);
>  
> @@ -6348,7 +6349,7 @@ remote_fetch_registers (struct target_ops *ops,
>     first.  */
>  
>  static void
> -remote_prepare_to_store (struct regcache *regcache)
> +remote_prepare_to_store (struct target_ops *ops, struct regcache *regcache)
>  {
>    struct remote_arch_state *rsa = get_remote_arch_state ();
>    int i;
> diff --git a/gdb/target.c b/gdb/target.c
> index 920f916..ecffc9c 100644
> --- a/gdb/target.c
> +++ b/gdb/target.c
> @@ -96,8 +96,6 @@ static struct target_ops debug_target;
>  
>  static void debug_to_open (char *, int);
>  
> -static void debug_to_prepare_to_store (struct regcache *);
> -
>  static void debug_to_files_info (struct target_ops *);
>  
>  static int debug_to_insert_breakpoint (struct gdbarch *,
> @@ -623,7 +621,7 @@ update_current_target (void)
>        /* Do not inherit to_wait.  */
>        /* Do not inherit to_fetch_registers.  */
>        /* Do not inherit to_store_registers.  */
> -      INHERIT (to_prepare_to_store, t);
> +      /* Do not inherit to_prepare_to_store.  */
>        INHERIT (deprecated_xfer_memory, t);
>        INHERIT (to_files_info, t);
>        INHERIT (to_insert_breakpoint, t);
> @@ -757,9 +755,6 @@ update_current_target (void)
>    de_fault (to_post_attach,
>  	    (void (*) (int))
>  	    target_ignore);
> -  de_fault (to_prepare_to_store,
> -	    (void (*) (struct regcache *))
> -	    noprocess);
>    de_fault (deprecated_xfer_memory,
>  	    (int (*) (CORE_ADDR, gdb_byte *, int, int,
>  		      struct mem_attrib *, struct target_ops *))
> @@ -4033,6 +4028,26 @@ target_store_registers (struct regcache *regcache, int regno)
>    noprocess ();
>  }
>  
> +/* See target.h.  */
> +
> +void
> +target_prepare_to_store (struct regcache *regcache)
> +{
> +  struct target_ops *t;
> +
> +  for (t = current_target.beneath; t != NULL; t = t->beneath)
> +    {
> +      if (t->to_prepare_to_store != NULL)
> +	{
> +	  t->to_prepare_to_store (t, regcache);
> +	  if (targetdebug)
> +	    fprintf_unfiltered (gdb_stdlog, "target_prepare_to_store");

	    fprintf_unfiltered (gdb_stdlog, "target_prepare_to_store ()\n");


> +
> +	  return;
> +	}
> +    }
> +}
> +
>  int
>  target_core_of_thread (ptid_t ptid)
>  {
> @@ -4485,14 +4500,6 @@ target_call_history_range (ULONGEST begin, ULONGEST end, int flags)
>    tcomplain ();
>  }
>  
> -static void
> -debug_to_prepare_to_store (struct regcache *regcache)
> -{
> -  debug_target.to_prepare_to_store (regcache);
> -
> -  fprintf_unfiltered (gdb_stdlog, "target_prepare_to_store ()\n");
> -}
> -
>  static int
>  deprecated_debug_xfer_memory (CORE_ADDR memaddr, bfd_byte *myaddr, int len,
>  			      int write, struct mem_attrib *attrib,
> @@ -4944,7 +4951,6 @@ setup_target_debug (void)
>  
>    current_target.to_open = debug_to_open;
>    current_target.to_post_attach = debug_to_post_attach;
> -  current_target.to_prepare_to_store = debug_to_prepare_to_store;
>    current_target.deprecated_xfer_memory = deprecated_debug_xfer_memory;
>    current_target.to_files_info = debug_to_files_info;
>    current_target.to_insert_breakpoint = debug_to_insert_breakpoint;
> diff --git a/gdb/target.h b/gdb/target.h
> index 1bf716e..e890999 100644
> --- a/gdb/target.h
> +++ b/gdb/target.h
> @@ -434,7 +434,7 @@ struct target_ops
>  		       ptid_t, struct target_waitstatus *, int);
>      void (*to_fetch_registers) (struct target_ops *, struct regcache *, int);
>      void (*to_store_registers) (struct target_ops *, struct regcache *, int);
> -    void (*to_prepare_to_store) (struct regcache *);
> +    void (*to_prepare_to_store) (struct target_ops *, struct regcache *);
>  
>      /* Transfer LEN bytes of memory between GDB address MYADDR and
>         target address MEMADDR.  If WRITE, transfer them to the target, else
> @@ -1055,8 +1055,7 @@ extern void target_store_registers (struct regcache *regcache, int regs);
>     that REGISTERS contains all the registers from the program being
>     debugged.  */
>  
> -#define	target_prepare_to_store(regcache)	\
> -     (*current_target.to_prepare_to_store) (regcache)
> +extern void target_prepare_to_store (struct regcache *);
>  
>  /* Determine current address space of thread PTID.  */
>  
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 18/24] record-btrace: extend unwinder
  2013-07-03  9:15 ` [patch v4 18/24] record-btrace: extend unwinder Markus Metzger
@ 2013-08-18 19:08   ` Jan Kratochvil
  2013-09-16 11:21     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:08 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:28 +0200, Markus Metzger wrote:
> Extend the always failing unwinder to provide the PC based on the call structure
> detected in the branch trace.
> 
> There are several open points:
> 
> An assertion in get_frame_id at frame.c:340 requires that a frame provides a
> stack address.  The record-btrace unwinder can't provide this since the trace
> does not contain data.  I incorrectly set stack_addr_p to 1 to avoid the
> assertion.

Primarily, record-btrace can provide the stack address.  You know $sp at the
end of the recording, and you can query .eh_frame/.debug_frame at any PC
address for the difference between $sp and the caller's $sp at that exact PC.
This assumes either that all the involved binaries were built with
-fasynchronous-unwind-tables (for .eh_frame) or that debug info
(for .debug_frame) is present.  The former is true in Fedora / Red Hat
distros; I am not aware how it is for other distros.

execute_cfa_program() will produce a struct dwarf2_frame_state, where you are
interested in the regs.cfa_* fields.

It could even be interesting for users to put a hardware watchpoint on some
stack variable location and re-run the code (so that one works around the
missing recorded data).  Sure, there may be many false positives in such
cases.  I am not sure how useful that would be in practice; I use a similar
debugging method for heap data locations (also with various false positives).

The current method of a constant STACK_ADDR may have some problems with
frame_id_inner(), but I did not investigate it further.


> When evaluating arguments for printing the stack back trace, there's an ugly
> error displayed: "error reading variable: can't compute CFA for this frame".
> The error is correct, we can't compute the CFA since we don't have the stack at
> that time, but it is rather annoying at this place and makes the back trace
> difficult to read.

This would also change; at least the error would be different.


> 
> Now that we set the PC to a different value and provide a fake unwinder, we have
> the potential to affect almost every other command.  How can this be tested
> sufficiently?  I added a few tests for the intended functionality, but nothing
> so far to ensure that it does not break some other command when used in this
> context.
> 
> Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
> 2013-04-24  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* frame.h (enum frame_type) <BTRACE_FRAME>: New.
> 	* record-btrace.c: Include hashtab.h.
> 	(btrace_get_bfun_name): New.
> 	(btrace_call_history): Call btrace_get_bfun_name.
> 	(struct btrace_frame_cache): New.
> 	(bfcache): New.
> 	(bfcache_hash, bfcache_eq, bfcache_new): New.
> 	(btrace_get_frame_function): New.
> 	(record_btrace_frame_unwind_stop_reason): Allow unwinding.
> 	(record_btrace_frame_this_id): Compute own id.
> 	(record_btrace_frame_prev_register): Provide PC, throw_error
> 	for all other registers.
> 	(record_btrace_frame_sniffer): Detect btrace frames.
> 	(record_btrace_frame_dealloc_cache): New.
> 	(record_btrace_frame_unwind): Add new functions.
> 	(_initialize_record_btrace): Allocate cache.
> 	* btrace.c (btrace_clear): Call reinit_frame_cache.
> 	* NEWS: Announce it.
> 
> testsuite/
> 	* gdb.btrace/record_goto.exp: Add backtrace test.
> 	* gdb.btrace/tailcall.exp: Add backtrace test.
> 
> 
> ---
>  gdb/NEWS                                 |    2 +
>  gdb/btrace.c                             |    4 +
>  gdb/frame.h                              |    4 +-
>  gdb/record-btrace.c                      |  259 +++++++++++++++++++++++++++---
>  gdb/testsuite/gdb.btrace/record_goto.exp |   13 ++
>  gdb/testsuite/gdb.btrace/tailcall.exp    |   17 ++
>  6 files changed, 279 insertions(+), 20 deletions(-)
> 
> diff --git a/gdb/NEWS b/gdb/NEWS
> index bfe4dd4..9b9de71 100644
> --- a/gdb/NEWS
> +++ b/gdb/NEWS
> @@ -14,6 +14,8 @@ Nios II GNU/Linux		nios2*-*-linux
>  Texas Instruments MSP430	msp430*-*-elf
>  
>  * The btrace record target supports the 'record goto' command.
> +  For locations inside the execution trace, the back trace is computed
> +  based on the information stored in the execution trace.
>  
>  * The command 'record function-call-history' supports a new modifier '/c' to
>    indent the function names based on their call stack depth.
> diff --git a/gdb/btrace.c b/gdb/btrace.c
> index 0bec2cf..822926c 100644
> --- a/gdb/btrace.c
> +++ b/gdb/btrace.c
> @@ -755,6 +755,10 @@ btrace_clear (struct thread_info *tp)
>  
>    DEBUG ("clear thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
>  
> +  /* Make sure btrace frames that may hold a pointer into the branch
> +     trace data are destroyed.  */
> +  reinit_frame_cache ();
> +
>    btinfo = &tp->btrace;
>  
>    it = btinfo->begin;
> diff --git a/gdb/frame.h b/gdb/frame.h
> index 31b9cb7..db4cc52 100644
> --- a/gdb/frame.h
> +++ b/gdb/frame.h
> @@ -216,7 +216,9 @@ enum frame_type
>    ARCH_FRAME,
>    /* Sentinel or registers frame.  This frame obtains register values
>       direct from the inferior's registers.  */
> -  SENTINEL_FRAME
> +  SENTINEL_FRAME,
> +  /* A branch tracing frame.  */
> +  BTRACE_FRAME
>  };
>  
>  /* For every stopped thread, GDB tracks two frames: current and
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index d6508bd..a528f8b 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -34,6 +34,7 @@
>  #include "filenames.h"
>  #include "regcache.h"
>  #include "frame-unwind.h"
> +#include "hashtab.h"
>  
>  /* The target_ops of record-btrace.  */
>  static struct target_ops record_btrace_ops;
> @@ -507,6 +508,28 @@ btrace_call_history_src_line (struct ui_out *uiout,
>    ui_out_field_int (uiout, "max line", end);
>  }
>  
> +/* Get the name of a branch trace function.  */
> +
> +static const char *
> +btrace_get_bfun_name (const struct btrace_function *bfun)
> +{
> +  struct minimal_symbol *msym;
> +  struct symbol *sym;
> +
> +  if (bfun == NULL)
> +    return "<none>";

_("<none>")


> +
> +  msym = bfun->msym;
> +  sym = bfun->sym;
> +
> +  if (sym != NULL)
> +    return SYMBOL_PRINT_NAME (sym);
> +  else if (msym != NULL)
> +    return SYMBOL_PRINT_NAME (msym);
> +  else
> +    return "<unknown>";

_("<unknown>")


> +}
> +
>  /* Disassemble a section of the recorded function trace.  */
>  
>  static void
> @@ -524,12 +547,8 @@ btrace_call_history (struct ui_out *uiout,
>    for (it = *begin; btrace_call_cmp (&it, end) != 0; btrace_call_next (&it, 1))
>      {
>        const struct btrace_function *bfun;
> -      struct minimal_symbol *msym;
> -      struct symbol *sym;
>  
>        bfun = btrace_call_get (&it);
> -      msym = bfun->msym;
> -      sym = bfun->sym;
>  
>        /* Print the function index.  */
>        ui_out_field_uint (uiout, "index", bfun->number);
> @@ -543,12 +562,7 @@ btrace_call_history (struct ui_out *uiout,
>  	    ui_out_text (uiout, "  ");
>  	}
>  
> -      if (sym != NULL)
> -	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (sym));
> -      else if (msym != NULL)
> -	ui_out_field_string (uiout, "function", SYMBOL_PRINT_NAME (msym));
> -      else
> -	ui_out_field_string (uiout, "function", "<unknown>");
> +      ui_out_field_string (uiout, "function", btrace_get_bfun_name (bfun));
>  
>        if ((flags & RECORD_PRINT_INSN_RANGE) != 0)
>  	{
> @@ -902,13 +916,100 @@ record_btrace_prepare_to_store (struct target_ops *ops,
>        }
>  }
>  
> +/* The branch trace frame cache.  */
> +
> +struct btrace_frame_cache
> +{
> +  /* The thread.  */
> +  struct thread_info *tp;
> +
> +  /* The frame info.  */
> +  struct frame_info *frame;
> +
> +  /* The branch trace function segment.  */
> +  const struct btrace_function *bfun;
> +
> +  /* The return PC into this frame.  */
> +  CORE_ADDR pc;
> +};
> +
> +/* A struct btrace_frame_cache hash table indexed by NEXT.  */
> +
> +static htab_t bfcache;
> +
> +/* hash_f for htab_create_alloc of bfcache.  */
> +
> +static hashval_t
> +bfcache_hash (const void *arg)
> +{
> +  const struct btrace_frame_cache *cache = arg;
> +
> +  return htab_hash_pointer (cache->frame);
> +}
> +
> +/* eq_f for htab_create_alloc of bfcache.  */
> +
> +static int
> +bfcache_eq (const void *arg1, const void *arg2)
> +{
> +  const struct btrace_frame_cache *cache1 = arg1;
> +  const struct btrace_frame_cache *cache2 = arg2;
> +
> +  return cache1->frame == cache2->frame;
> +}
> +
> +/* Create a new btrace frame cache.  */
> +
> +static struct btrace_frame_cache *
> +bfcache_new (struct frame_info *frame)
> +{
> +  struct btrace_frame_cache *cache;
> +  void **slot;
> +
> +  cache = FRAME_OBSTACK_ZALLOC (struct btrace_frame_cache);
> +  cache->frame = frame;
> +
> +  slot = htab_find_slot (bfcache, cache, INSERT);
> +  gdb_assert (*slot == NULL);
> +  *slot = cache;
> +
> +  return cache;
> +}
> +
> +/* Extract the branch trace function from a branch trace frame.  */
> +
> +static const struct btrace_function *
> +btrace_get_frame_function (struct frame_info *frame)
> +{
> +  const struct btrace_frame_cache *cache;
> +  const struct btrace_function *bfun;
> +  struct btrace_frame_cache pattern;
> +  void **slot;
> +
> +  pattern.frame = frame;
> +
> +  slot = htab_find_slot (bfcache, &pattern, NO_INSERT);
> +  if (slot == NULL)
> +    return NULL;
> +
> +  cache = *slot;
> +  return cache->bfun;
> +}
> +
>  /* Implement stop_reason method for record_btrace_frame_unwind.  */
>  
>  static enum unwind_stop_reason
>  record_btrace_frame_unwind_stop_reason (struct frame_info *this_frame,
>  					void **this_cache)
>  {
> -  return UNWIND_UNAVAILABLE;
> +  const struct btrace_frame_cache *cache;
> +
> +  cache = *this_cache;
> +
> +  if (cache->bfun == NULL)
> +    return UNWIND_UNAVAILABLE;
> +
> +  return UNWIND_NO_REASON;
>  }
>  
>  /* Implement this_id method for record_btrace_frame_unwind.  */
> @@ -917,7 +1018,21 @@ static void
>  record_btrace_frame_this_id (struct frame_info *this_frame, void **this_cache,
>  			     struct frame_id *this_id)
>  {
> -  /* Leave there the outer_frame_id value.  */
> +  const struct btrace_frame_cache *cache;
> +  CORE_ADDR stack, code, special;
> +
> +  cache = *this_cache;
> +
> +  stack = 0;
> +  code = get_frame_func (this_frame);
> +  special = (CORE_ADDR) cache->bfun;
> +
> +  *this_id = frame_id_build_special (stack, code, special);
> +
> +  DEBUG ("[frame] %s id: (!stack, pc=%s, special=%s)",
> +	 btrace_get_bfun_name (cache->bfun),
> +	 core_addr_to_string_nz (this_id->code_addr),
> +	 core_addr_to_string_nz (this_id->special_addr));
>  }
>  
>  /* Implement prev_register method for record_btrace_frame_unwind.  */
> @@ -927,8 +1042,31 @@ record_btrace_frame_prev_register (struct frame_info *this_frame,
>  				   void **this_cache,
>  				   int regnum)
>  {
> -  throw_error (NOT_AVAILABLE_ERROR,
> -              _("Registers are not available in btrace record history"));
> +  const struct btrace_frame_cache *cache;
> +  const struct btrace_function *bfun;
> +  struct gdbarch *gdbarch;
> +  CORE_ADDR pc;
> +  int pcreg;
> +
> +  gdbarch = get_frame_arch (this_frame);
> +  pcreg = gdbarch_pc_regnum (gdbarch);
> +  if (pcreg < 0 || regnum != pcreg)
> +    throw_error (NOT_AVAILABLE_ERROR,
> +		 _("Registers are not available in btrace record history"));
> +
> +  cache = *this_cache;
> +  bfun = cache->bfun;
> +  if (bfun == NULL)
> +    throw_error (NOT_AVAILABLE_ERROR,
> +		 _("Registers are not available in btrace record history"));
> +
> +  pc = cache->pc;
> +
> +  DEBUG ("[frame] unwound PC for %s on level %d: %s",
> +	 btrace_get_bfun_name (bfun), bfun->level,
> +	 core_addr_to_string_nz (pc));
> +
> +  return frame_unwind_got_address (this_frame, regnum, pc);
>  }
>  
>  /* Implement sniffer method for record_btrace_frame_unwind.  */
> @@ -938,9 +1076,14 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
>  			     struct frame_info *this_frame,
>  			     void **this_cache)
>  {
> +  const struct btrace_thread_info *btinfo;
> +  const struct btrace_insn_iterator *replay;
> +  const struct btrace_insn *insn;
> +  const struct btrace_function *bfun, *caller;
> +  struct btrace_frame_cache *cache;
>    struct thread_info *tp;
> -  struct btrace_thread_info *btinfo;
> -  struct btrace_insn_iterator *replay;
> +  struct frame_info *next;
> +  CORE_ADDR pc;
>  
>    /* This doesn't seem right.  Yet, I don't see how I could get from a frame
>       to its thread.  */
> @@ -948,7 +1091,81 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
>    if (tp == NULL)
>      return 0;
>  
> -  return btrace_is_replaying (tp);
> +  replay = tp->btrace.replay;
> +  if (replay == NULL)
> +    return 0;
> +
> +  /* Find the next frame's branch trace function.  */
> +  next = get_next_frame (this_frame);
> +  if (next == NULL)
> +    {
> +      /* The sentinel frame below corresponds to our replay position.  */
> +      bfun = replay->function;
> +    }
> +  else
> +    {
> +      /* This is an outer frame.  It must be the predecessor of another
> +	 branch trace frame.  Let's get this frame's branch trace function
> +	 so we can compute our own.  */
> +      bfun = btrace_get_frame_function (next);
> +    }
> +
> +  /* If we did not find a branch trace function, this is not our frame.  */
> +  if (bfun == NULL)
> +    return 0;
> +
> +  /* Go up to the calling function segment.  */
> +  caller = bfun->up;
> +  pc = 0;
> +
> +  /* Determine where to find the PC in the upper function segment.  */
> +  if (caller != NULL)
> +    {
> +      if ((bfun->flags & BFUN_UP_LINKS_TO_RET) != 0)
> +	{
> +	  insn = VEC_index (btrace_insn_s, caller->insn, 0);
> +	  pc = insn->pc;
> +	}
> +      else
> +	{
> +	  insn = VEC_last (btrace_insn_s, caller->insn);
> +	  pc = insn->pc;
> +
> +	  /* We link directly to the jump instruction in the case of a tail
> +	     call, since the next instruction will likely be outside of the
> +	     caller function.  */
> +	  if ((bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) == 0)
> +	    pc += gdb_insn_length (get_frame_arch (this_frame), pc);
> +	}
> +
> +      DEBUG ("[frame] sniffed frame for %s on level %d",
> +	     btrace_get_bfun_name (caller), caller->level);
> +    }
> +  else
> +    DEBUG ("[frame] sniffed top btrace frame");
> +
> +  /* This is our frame.  Initialize the frame cache.  */
> +  cache = bfcache_new (this_frame);
> +  cache->tp = tp;
> +  cache->bfun = caller;
> +  cache->pc = pc;
> +
> +  *this_cache = cache;
> +  return 1;
> +}
> +
> +static void
> +record_btrace_frame_dealloc_cache (struct frame_info *self, void *this_cache)
> +{
> +  struct btrace_frame_cache *cache;
> +  void **slot;
> +
> +  cache = this_cache;
> +
> +  slot = htab_find_slot (bfcache, cache, NO_INSERT);
> +  gdb_assert (slot != NULL);
> +
> +  htab_remove_elt (bfcache, cache);
>  }
>  
>  /* btrace recording does not store previous memory content, neither the stack
> @@ -959,12 +1176,13 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
>  
>  static const struct frame_unwind record_btrace_frame_unwind =
>  {
> -  NORMAL_FRAME,
> +  BTRACE_FRAME,
>    record_btrace_frame_unwind_stop_reason,
>    record_btrace_frame_this_id,
>    record_btrace_frame_prev_register,
>    NULL,
> -  record_btrace_frame_sniffer
> +  record_btrace_frame_sniffer,
> +  record_btrace_frame_dealloc_cache
>  };
>  
>  /* The to_resume method of target record-btrace.  */
> @@ -1178,4 +1396,7 @@ _initialize_record_btrace (void)
>  
>    init_record_btrace_ops ();
>    add_target (&record_btrace_ops);
> +
> +  bfcache = htab_create_alloc (50, bfcache_hash, bfcache_eq, NULL,
> +			       xcalloc, xfree);
>  }
> diff --git a/gdb/testsuite/gdb.btrace/record_goto.exp b/gdb/testsuite/gdb.btrace/record_goto.exp
> index a9f9a64..8477a03 100644
> --- a/gdb/testsuite/gdb.btrace/record_goto.exp
> +++ b/gdb/testsuite/gdb.btrace/record_goto.exp
> @@ -75,6 +75,19 @@ gdb_test "record instruction-history" "
>  gdb_test "record goto 26" "
>  .*fun3 \\(\\) at record_goto.c:35.*" "record_goto - goto 26"
>  
> +# check the back trace at that location
> +gdb_test "backtrace" "
> +#0.*fun3.*at record_goto.c:35.*\r
> +#1.*fun4.*at record_goto.c:44.*\r
> +#2.*main.*at record_goto.c:51.*\r
> +Backtrace stopped: not enough registers or memory available to unwind further" "backtrace at 25"
> +
> +# walk the backtrace
> +gdb_test "up" "
> +.*fun4.*at record_goto.c:44.*" "up to fun4"
> +gdb_test "up" "
> +.*main.*at record_goto.c:51.*" "up to main"
> +
>  # the function call history should start at the new location
>  gdb_test "record function-call-history /ci -" "
>  8\t    fun3\tinst 19,21\r
> diff --git a/gdb/testsuite/gdb.btrace/tailcall.exp b/gdb/testsuite/gdb.btrace/tailcall.exp
> index cf9fdf3..ada4b14 100644
> --- a/gdb/testsuite/gdb.btrace/tailcall.exp
> +++ b/gdb/testsuite/gdb.btrace/tailcall.exp
> @@ -47,3 +47,20 @@ gdb_test "record function-call-history /c 1" "
>  1\t  foo\r
>  2\t    bar\r
>  3\tmain" "tailcall - calls indented"
> +
> +# go into bar
> +gdb_test "record goto 3" "
> +.*bar \\(\\) at .*x86-tailcall.c:24.*" "go to bar"
> +
> +# check the backtrace
> +gdb_test "backtrace" "
> +#0.*bar.*at .*x86-tailcall.c:24.*\r
> +#1.*foo.*at .*x86-tailcall.c:29.*\r
> +#2.*main.*at .*x86-tailcall.c:37.*\r
> +Backtrace stopped: not enough registers or memory available to unwind further" "backtrace in bar"
> +
> +# walk the backtrace
> +gdb_test "up" "
> +.*foo \\(\\) at .*x86-tailcall.c:29.*" "up to foo"
> +gdb_test "up" "
> +.*main \\(\\) at .*x86-tailcall.c:37.*" "up to main"
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 17/24] record-btrace: add record goto target methods
  2013-07-03  9:15 ` [patch v4 17/24] record-btrace: add record goto target methods Markus Metzger
@ 2013-08-18 19:08   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:08 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches, Christian Himpel

On Wed, 03 Jul 2013 11:14:27 +0200, Markus Metzger wrote:
> Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
> CC: Christian Himpel  <christian.himpel@intel.com>
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* record-btrace.c (record_btrace_set_replay,
> 	record_btrace_goto_begin, record_btrace_goto_end,
> 	record_btrace_goto): New.
> 	(init_record_btrace_ops): Initialize them.
> 	* NEWS: Announce it.
> 
> testsuite/
> 	* gdb.btrace/Makefile.in (EXECUTABLES): Add record_goto.
> 	* gdb.btrace/record_goto.c: New.
> 	* gdb.btrace/record_goto.exp: New.
> 	* gdb.btrace/x86-record_goto.S: New.
> 
> 
> ---
>  gdb/NEWS                                   |    2 +
>  gdb/record-btrace.c                        |   91 ++++++++
>  gdb/testsuite/gdb.btrace/Makefile.in       |    2 +-
>  gdb/testsuite/gdb.btrace/record_goto.c     |   51 +++++
>  gdb/testsuite/gdb.btrace/record_goto.exp   |  152 +++++++++++++
>  gdb/testsuite/gdb.btrace/x86-record_goto.S |  332 ++++++++++++++++++++++++++++
>  6 files changed, 629 insertions(+), 1 deletions(-)
>  create mode 100644 gdb/testsuite/gdb.btrace/record_goto.c
>  create mode 100644 gdb/testsuite/gdb.btrace/record_goto.exp
>  create mode 100644 gdb/testsuite/gdb.btrace/x86-record_goto.S
> 
> diff --git a/gdb/NEWS b/gdb/NEWS
> index 6ac910a..bfe4dd4 100644
> --- a/gdb/NEWS
> +++ b/gdb/NEWS
> @@ -13,6 +13,8 @@ Nios II ELF 			nios2*-*-elf
>  Nios II GNU/Linux		nios2*-*-linux
>  Texas Instruments MSP430	msp430*-*-elf
>  
> +* The btrace record target supports the 'record goto' command.
> +
>  * The command 'record function-call-history' supports a new modifier '/c' to
>    indent the function names based on their call stack depth.
>    The fields for the '/i' and '/l' modifier have been reordered.
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index 2b552d5..d6508bd 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -1023,6 +1023,94 @@ record_btrace_find_new_threads (struct target_ops *ops)
>        }
>  }
>  
> +/* Set the replay branch trace instruction iterator.  */

Describe that IT may be NULL and what that means.
(I would require IT != NULL, but that does not matter much.)
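
Something like the following as the function comment would do (the wording is
just a suggestion):

/* Set the current replay position to IT.  If IT is NULL or does not point
   into the recorded trace, stop replaying and go back to the end of the
   execution history.  */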


> +
> +static void
> +record_btrace_set_replay (struct btrace_thread_info *btinfo,
> +			  const struct btrace_insn_iterator *it)
> +{
> +  if (it == NULL || it->function == NULL)
> +    {
> +      if (btinfo->replay == NULL)
> +	return;
> +
> +      xfree (btinfo->replay);
> +      btinfo->replay = NULL;
> +    }
> +  else
> +    {
> +      if (btinfo->replay == NULL)
> +	btinfo->replay = xzalloc (sizeof (*btinfo->replay));

xmalloc, a nitpick.


> +      else if (btrace_insn_cmp (btinfo->replay, it) == 0)
> +	return;
> +
> +      *btinfo->replay = *it;
> +    }
> +
> +  /* Clear the function call and instruction histories so we start anew
> +     from the new replay position.  */
> +  xfree (btinfo->insn_history);
> +  xfree (btinfo->call_history);
> +
> +  btinfo->insn_history = NULL;
> +  btinfo->call_history = NULL;
> +
> +  registers_changed ();
> +  reinit_frame_cache ();
> +  print_stack_frame (get_selected_frame (NULL), 1, SRC_AND_LOC);
> +}
> +
> +/* The to_goto_record_begin method of target record-btrace.  */
> +
> +static void
> +record_btrace_goto_begin (void)
> +{
> +  struct btrace_thread_info *btinfo;
> +  struct btrace_insn_iterator begin;
> +
> +  btinfo = require_btrace ();
> +
> +  btrace_insn_begin (&begin, btinfo);
> +  record_btrace_set_replay (btinfo, &begin);
> +}
> +
> +/* The to_goto_record_end method of target record-btrace.  */
> +
> +static void
> +record_btrace_goto_end (void)
> +{
> +  struct btrace_thread_info *btinfo;
> +
> +  btinfo = require_btrace ();
> +
> +  record_btrace_set_replay (btinfo, NULL);
> +}
> +
> +/* The to_goto_record method of target record-btrace.  */
> +
> +static void
> +record_btrace_goto (ULONGEST insn)
> +{
> +  struct btrace_thread_info *btinfo;
> +  struct btrace_insn_iterator it;
> +  unsigned int number;
> +  int found;
> +
> +  number = (unsigned int) insn;

Needless cast.

> +
> +  /* Check for wrap-arounds.  */
> +  if (number != insn)
> +    error (_("Instruction number out of range."));
> +
> +  btinfo = require_btrace ();
> +
> +  found = btrace_find_insn_by_number (&it, btinfo, number);
> +  if (found == 0)
> +    error (_("No such instruction."));
> +
> +  record_btrace_set_replay (btinfo, &it);
> +}
> +
>  /* Initialize the record-btrace target ops.  */
>  
>  static void
> @@ -1058,6 +1146,9 @@ init_record_btrace_ops (void)
>    ops->to_resume = record_btrace_resume;
>    ops->to_wait = record_btrace_wait;
>    ops->to_find_new_threads = record_btrace_find_new_threads;
> +  ops->to_goto_record_begin = record_btrace_goto_begin;
> +  ops->to_goto_record_end = record_btrace_goto_end;
> +  ops->to_goto_record = record_btrace_goto;
>    ops->to_stratum = record_stratum;
>    ops->to_magic = OPS_MAGIC;
>  }
> diff --git a/gdb/testsuite/gdb.btrace/Makefile.in b/gdb/testsuite/gdb.btrace/Makefile.in
> index 5c70700..aa2820a 100644
> --- a/gdb/testsuite/gdb.btrace/Makefile.in
> +++ b/gdb/testsuite/gdb.btrace/Makefile.in
> @@ -2,7 +2,7 @@ VPATH = @srcdir@
>  srcdir = @srcdir@
>  
>  EXECUTABLES   = enable function_call_history instruction_history tailcall \
> -  exception
> +  exception record_goto
>  
>  MISCELLANEOUS =
>  
> diff --git a/gdb/testsuite/gdb.btrace/record_goto.c b/gdb/testsuite/gdb.btrace/record_goto.c
> new file mode 100644
> index 0000000..1250708
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/record_goto.c
> @@ -0,0 +1,51 @@
> +/* This testcase is part of GDB, the GNU debugger.
> +
> +   Copyright 2013 Free Software Foundation, Inc.
> +
> +   Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +void
> +fun1 (void)
> +{
> +}
> +
> +void
> +fun2 (void)
> +{
> +  fun1 ();
> +}
> +
> +void
> +fun3 (void)
> +{
> +  fun1 ();
> +  fun2 ();
> +}
> +
> +void
> +fun4 (void)
> +{
> +  fun1 ();
> +  fun2 ();
> +  fun3 ();
> +}
> +
> +int
> +main (void)
> +{
> +  fun4 ();
> +  return 0;
> +}
> diff --git a/gdb/testsuite/gdb.btrace/record_goto.exp b/gdb/testsuite/gdb.btrace/record_goto.exp
> new file mode 100644
> index 0000000..a9f9a64
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/record_goto.exp
> @@ -0,0 +1,152 @@
> +# This testcase is part of GDB, the GNU debugger.
> +#
> +# Copyright 2013 Free Software Foundation, Inc.
> +#
> +# Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# check for btrace support
> +if { [skip_btrace_tests] } { return -1 }
> +
> +# start inferior
> +standard_testfile x86-record_goto.S
> +if [prepare_for_testing record_goto.exp $testfile $srcfile] {
> +    return -1
> +}

Please add the same arch protection + COMPILE=1 option as suggested in one of
the previous mails.


> +if ![runto_main] {
> +    return -1
> +}
> +
> +# we want a small context sizes to simplify the test
> +gdb_test_no_output "set record instruction-history-size 3"
> +gdb_test_no_output "set record function-call-history-size 3"
> +
> +# trace the call to the test function
> +gdb_test_no_output "record btrace"
> +gdb_test "next"
> +
> +# start by listing all functions
> +gdb_test "record function-call-history /ci 1, +20" "
> +1\t  fun4\tinst 1,3\r
> +2\t    fun1\tinst 4,7\r
> +3\t  fun4\tinst 8,8\r
> +4\t    fun2\tinst 9,11\r
> +5\t      fun1\tinst 12,15\r
> +6\t    fun2\tinst 16,17\r
> +7\t  fun4\tinst 18,18\r
> +8\t    fun3\tinst 19,21\r
> +9\t      fun1\tinst 22,25\r
> +10\t    fun3\tinst 26,26\r
> +11\t      fun2\tinst 27,29\r
> +12\t        fun1\tinst 30,33\r
> +13\t      fun2\tinst 34,35\r
> +14\t    fun3\tinst 36,37\r
> +15\t  fun4\tinst 38,39\r" "record_goto - list all functions"
> +
> +# let's see if we can go back in history
> +gdb_test "record goto 18" "
> +.*fun4 \\(\\) at record_goto.c:43.*" "record_goto - goto 18"
> +
> +# the function call history should start at the new location
> +gdb_test "record function-call-history /ci" "
> +7\t  fun4\tinst 18,18\r
> +8\t    fun3\tinst 19,21\r
> +9\t      fun1\tinst 22,25\r" "record_goto - function-call-history from 18 forwards"
> +
> +# the instruciton history should start at the new location
> +gdb_test "record instruction-history" "
> +18.*\r
> +19.*\r
> +20.*\r" "record_goto - instruciton-history from 18 forwards"
> +
> +# let's go to another place in the history
> +gdb_test "record goto 26" "
> +.*fun3 \\(\\) at record_goto.c:35.*" "record_goto - goto 26"
> +
> +# the function call history should start at the new location
> +gdb_test "record function-call-history /ci -" "
> +8\t    fun3\tinst 19,21\r
> +9\t      fun1\tinst 22,25\r
> +10\t    fun3\tinst 26,26\r" "record_goto - function-call-history from 26 backwards"
> +
> +# the instruciton history should start at the new location
> +gdb_test "record instruction-history -" "
> +24.*\r
> +25.*\r
> +26.*\r" "record_goto - instruciton-history from 26 backwards"
> +
> +# test that we can go to the begin of the trace
> +gdb_test "record goto begin" "
> +.*fun4 \\(\\) at record_goto.c:40.*" "record_goto - goto begin"
> +
> +# check that we're filling up the context correctly
> +gdb_test "record function-call-history /ci -" "
> +1\t  fun4\tinst 1,3\r
> +2\t    fun1\tinst 4,7\r
> +3\t  fun4\tinst 8,8\r" "record_goto - function-call-history from begin backwards"
> +
> +# check that we're filling up the context correctly
> +gdb_test "record instruction-history -" "
> +1.*\r
> +2.*\r
> +3.*\r" "record_goto - instruciton-history from begin backwards"
> +
> +# we should get the exact same history from the first instruction
> +gdb_test "record goto 2" "
> +.*fun4 \\(\\) at record_goto.c:40.*" "record_goto - goto 2"
> +
> +# check that we're filling up the context correctly
> +gdb_test "record function-call-history /ci -" "
> +1\t  fun4\tinst 1,3\r
> +2\t    fun1\tinst 4,7\r
> +3\t  fun4\tinst 8,8\r" "record_goto - function-call-history from 2 backwards"
> +
> +# check that we're filling up the context correctly
> +gdb_test "record instruction-history -" "
> +1.*\r
> +2.*\r
> +3.*\r" "record_goto - instruciton-history from 2 backwards"
> +
> +# check that we can go to the end of the trace
> +gdb_test "record goto end" "
> +.*main \\(\\) at record_goto.c:50.*" "record_goto - goto end"
> +
> +# check that we're filling up the context correctly
> +gdb_test "record function-call-history /ci" "
> +13\t      fun2\tinst 34,35\r
> +14\t    fun3\tinst 36,37\r
> +15\t  fun4\tinst 38,39\r" "record_goto - function-call-history from end forwards"
> +
> +# check that we're filling up the context correctly
> +gdb_test "record instruction-history" "
> +37.*\r
> +38.*\r
> +39.*\r" "record_goto - instruciton-history from end forwards"
> +
> +# we should get the exact same history from the second to last instruction
> +gdb_test "record goto 38" "
> +.*fun4 \\(\\) at record_goto.c:44.*" "record_goto - goto 38"
> +
> +# check that we're filling up the context correctly
> +gdb_test "record function-call-history /ci" "
> +13\t      fun2\tinst 34,35\r
> +14\t    fun3\tinst 36,37\r
> +15\t  fun4\tinst 38,39\r" "record_goto - function-call-history from 38 forwards"
> +
> +# check that we're filling up the context correctly
> +gdb_test "record instruction-history" "
> +37.*\r
> +38.*\r
> +39.*\r" "record_goto - instruciton-history from 38 forwards"
> diff --git a/gdb/testsuite/gdb.btrace/x86-record_goto.S b/gdb/testsuite/gdb.btrace/x86-record_goto.S
> new file mode 100644
> index 0000000..d2e6621
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/x86-record_goto.S
> @@ -0,0 +1,332 @@
> +/* This testcase is part of GDB, the GNU debugger.
> +
> +   Copyright 2013 Free Software Foundation, Inc.
> +
> +   Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +
> +   This file has been generated using:
> +   gcc -S -g record_goto.c -o x86-record_goto.S  */

Again -dA is more convenient.


> +
> +	.file	"record_goto.c"
> +	.section	.debug_abbrev,"",@progbits
> +.Ldebug_abbrev0:
> +	.section	.debug_info,"",@progbits
> +.Ldebug_info0:
> +	.section	.debug_line,"",@progbits
> +.Ldebug_line0:
> +	.text
> +.Ltext0:
> +.globl fun1
> +	.type	fun1, @function
> +fun1:
> +.LFB0:
> +	.file 1 "record_goto.c"
> +	.loc 1 22 0
> +	.cfi_startproc
> +	pushq	%rbp
> +	.cfi_def_cfa_offset 16
> +	movq	%rsp, %rbp
> +	.cfi_offset 6, -16
> +	.cfi_def_cfa_register 6
> +	.loc 1 23 0
> +	leave
> +	.cfi_def_cfa 7, 8
> +	ret
> +	.cfi_endproc
> +.LFE0:
> +	.size	fun1, .-fun1
> +.globl fun2
> +	.type	fun2, @function
> +fun2:
> +.LFB1:
> +	.loc 1 27 0
> +	.cfi_startproc
> +	pushq	%rbp
> +	.cfi_def_cfa_offset 16
> +	movq	%rsp, %rbp
> +	.cfi_offset 6, -16
> +	.cfi_def_cfa_register 6
> +	.loc 1 28 0
> +	call	fun1
> +	.loc 1 29 0
> +	leave
> +	.cfi_def_cfa 7, 8
> +	ret
> +	.cfi_endproc
> +.LFE1:
> +	.size	fun2, .-fun2
> +.globl fun3
> +	.type	fun3, @function
> +fun3:
> +.LFB2:
> +	.loc 1 33 0
> +	.cfi_startproc
> +	pushq	%rbp
> +	.cfi_def_cfa_offset 16
> +	movq	%rsp, %rbp
> +	.cfi_offset 6, -16
> +	.cfi_def_cfa_register 6
> +	.loc 1 34 0
> +	call	fun1
> +	.loc 1 35 0
> +	call	fun2
> +	.loc 1 36 0
> +	leave
> +	.cfi_def_cfa 7, 8
> +	ret
> +	.cfi_endproc
> +.LFE2:
> +	.size	fun3, .-fun3
> +.globl fun4
> +	.type	fun4, @function
> +fun4:
> +.LFB3:
> +	.loc 1 40 0
> +	.cfi_startproc
> +	pushq	%rbp
> +	.cfi_def_cfa_offset 16
> +	movq	%rsp, %rbp
> +	.cfi_offset 6, -16
> +	.cfi_def_cfa_register 6
> +	.loc 1 41 0
> +	call	fun1
> +	.loc 1 42 0
> +	call	fun2
> +	.loc 1 43 0
> +	call	fun3
> +	.loc 1 44 0
> +	leave
> +	.cfi_def_cfa 7, 8
> +	ret
> +	.cfi_endproc
> +.LFE3:
> +	.size	fun4, .-fun4
> +.globl main
> +	.type	main, @function
> +main:
> +.LFB4:
> +	.loc 1 48 0
> +	.cfi_startproc
> +	pushq	%rbp
> +	.cfi_def_cfa_offset 16
> +	movq	%rsp, %rbp
> +	.cfi_offset 6, -16
> +	.cfi_def_cfa_register 6
> +	.loc 1 49 0
> +	call	fun4
> +	.loc 1 50 0
> +	movl	$0, %eax
> +	.loc 1 51 0
> +	leave
> +	.cfi_def_cfa 7, 8
> +	ret
> +	.cfi_endproc
> +.LFE4:
> +	.size	main, .-main
> +.Letext0:
> +	.section	.debug_info
> +	.long	0xbc
> +	.value	0x3
> +	.long	.Ldebug_abbrev0
> +	.byte	0x8
> +	.uleb128 0x1
> +	.long	.LASF4
> +	.byte	0x1
> +	.long	.LASF5
> +	.long	.LASF6
> +	.quad	.Ltext0
> +	.quad	.Letext0
> +	.long	.Ldebug_line0
> +	.uleb128 0x2
> +	.byte	0x1
> +	.long	.LASF0
> +	.byte	0x1
> +	.byte	0x15
> +	.byte	0x1
> +	.quad	.LFB0
> +	.quad	.LFE0
> +	.byte	0x1
> +	.byte	0x9c
> +	.uleb128 0x2
> +	.byte	0x1
> +	.long	.LASF1
> +	.byte	0x1
> +	.byte	0x1a
> +	.byte	0x1
> +	.quad	.LFB1
> +	.quad	.LFE1
> +	.byte	0x1
> +	.byte	0x9c
> +	.uleb128 0x2
> +	.byte	0x1
> +	.long	.LASF2
> +	.byte	0x1
> +	.byte	0x20
> +	.byte	0x1
> +	.quad	.LFB2
> +	.quad	.LFE2
> +	.byte	0x1
> +	.byte	0x9c
> +	.uleb128 0x2
> +	.byte	0x1
> +	.long	.LASF3
> +	.byte	0x1
> +	.byte	0x27
> +	.byte	0x1
> +	.quad	.LFB3
> +	.quad	.LFE3
> +	.byte	0x1
> +	.byte	0x9c
> +	.uleb128 0x3
> +	.byte	0x1
> +	.long	.LASF7
> +	.byte	0x1
> +	.byte	0x2f
> +	.byte	0x1
> +	.long	0xb8
> +	.quad	.LFB4
> +	.quad	.LFE4
> +	.byte	0x1
> +	.byte	0x9c
> +	.uleb128 0x4
> +	.byte	0x4
> +	.byte	0x5
> +	.string	"int"
> +	.byte	0x0
> +	.section	.debug_abbrev
> +	.uleb128 0x1
> +	.uleb128 0x11
> +	.byte	0x1
> +	.uleb128 0x25
> +	.uleb128 0xe
> +	.uleb128 0x13
> +	.uleb128 0xb
> +	.uleb128 0x3
> +	.uleb128 0xe
> +	.uleb128 0x1b
> +	.uleb128 0xe
> +	.uleb128 0x11
> +	.uleb128 0x1
> +	.uleb128 0x12
> +	.uleb128 0x1
> +	.uleb128 0x10
> +	.uleb128 0x6
> +	.byte	0x0
> +	.byte	0x0
> +	.uleb128 0x2
> +	.uleb128 0x2e
> +	.byte	0x0
> +	.uleb128 0x3f
> +	.uleb128 0xc
> +	.uleb128 0x3
> +	.uleb128 0xe
> +	.uleb128 0x3a
> +	.uleb128 0xb
> +	.uleb128 0x3b
> +	.uleb128 0xb
> +	.uleb128 0x27
> +	.uleb128 0xc
> +	.uleb128 0x11
> +	.uleb128 0x1
> +	.uleb128 0x12
> +	.uleb128 0x1
> +	.uleb128 0x40
> +	.uleb128 0xa
> +	.byte	0x0
> +	.byte	0x0
> +	.uleb128 0x3
> +	.uleb128 0x2e
> +	.byte	0x0
> +	.uleb128 0x3f
> +	.uleb128 0xc
> +	.uleb128 0x3
> +	.uleb128 0xe
> +	.uleb128 0x3a
> +	.uleb128 0xb
> +	.uleb128 0x3b
> +	.uleb128 0xb
> +	.uleb128 0x27
> +	.uleb128 0xc
> +	.uleb128 0x49
> +	.uleb128 0x13
> +	.uleb128 0x11
> +	.uleb128 0x1
> +	.uleb128 0x12
> +	.uleb128 0x1
> +	.uleb128 0x40
> +	.uleb128 0xa
> +	.byte	0x0
> +	.byte	0x0
> +	.uleb128 0x4
> +	.uleb128 0x24
> +	.byte	0x0
> +	.uleb128 0xb
> +	.uleb128 0xb
> +	.uleb128 0x3e
> +	.uleb128 0xb
> +	.uleb128 0x3
> +	.uleb128 0x8
> +	.byte	0x0
> +	.byte	0x0
> +	.byte	0x0
> +	.section	.debug_pubnames,"",@progbits
> +	.long	0x3b
> +	.value	0x2
> +	.long	.Ldebug_info0
> +	.long	0xc0
> +	.long	0x2d
> +	.string	"fun1"
> +	.long	0x48
> +	.string	"fun2"
> +	.long	0x63
> +	.string	"fun3"
> +	.long	0x7e
> +	.string	"fun4"
> +	.long	0x99
> +	.string	"main"
> +	.long	0x0
> +	.section	.debug_aranges,"",@progbits
> +	.long	0x2c
> +	.value	0x2
> +	.long	.Ldebug_info0
> +	.byte	0x8
> +	.byte	0x0
> +	.value	0x0
> +	.value	0x0
> +	.quad	.Ltext0
> +	.quad	.Letext0-.Ltext0
> +	.quad	0x0
> +	.quad	0x0
> +	.section	.debug_str,"MS",@progbits,1
> +.LASF3:
> +	.string	"fun4"
> +.LASF5:
> +	.string	"record_goto.c"
> +.LASF4:
> +	.string	"GNU C 4.4.4 20100726 (Red Hat 4.4.4-13)"
> +.LASF7:
> +	.string	"main"
> +.LASF1:
> +	.string	"fun2"
> +.LASF0:
> +	.string	"fun1"
> +.LASF6:
> +	.string	"/users/mmetzger/gdb/gerrit/git/gdb/testsuite/gdb.btrace"

Again, it is better to just put "" there.


> +.LASF2:
> +	.string	"fun3"
> +	.ident	"GCC: (GNU) 4.4.4 20100726 (Red Hat 4.4.4-13)"
> +	.section	.note.GNU-stack,"",@progbits
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 14/24] record-btrace: provide xfer_partial target method
  2013-07-03  9:14 ` [patch v4 14/24] record-btrace: provide xfer_partial target method Markus Metzger
@ 2013-08-18 19:08   ` Jan Kratochvil
  2013-09-16  9:30     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:08 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:24 +0200, Markus Metzger wrote:
> Provide the xfer_partial target method for the btrace record target.
> 
> Only allow memory accesses to readonly memory while we're replaying.
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* record-btrace.c (record_btrace_xfer_partial): New.
> 	(init_record_btrace_ops): Initialize xfer_partial.
> 
> 
> ---
>  gdb/record-btrace.c |   58 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 58 insertions(+), 0 deletions(-)
> 
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index cb1f3bb..831a367 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -754,6 +754,63 @@ record_btrace_is_replaying (void)
>    return 0;
>  }
>  
> +/* The to_xfer_partial method of target record-btrace.  */
> +
> +static LONGEST
> +record_btrace_xfer_partial (struct target_ops *ops, enum target_object object,
> +			    const char *annex, gdb_byte *readbuf,
> +			    const gdb_byte *writebuf, ULONGEST offset,
> +			    LONGEST len)
> +{
> +  struct target_ops *t;
> +
> +  /* Normalize the request so len is positive.  */
> +  if (len < 0)
> +    {
> +      offset += len;
> +      len = - len;
> +    }

I do not see how LEN could be < 0, do you?  Use just:
  gdb_assert (len >= 0);
(It should never even be LEN == 0, but that may not be guaranteed.)


> +
> +  /* Filter out requests that don't make sense during replay.  */
> +  if (record_btrace_is_replaying ())
> +    {
> +      switch (object)
> +	{
> +	case TARGET_OBJECT_MEMORY:
> +	case TARGET_OBJECT_RAW_MEMORY:
> +	case TARGET_OBJECT_STACK_MEMORY:
> +	  {
> +	    /* We allow reading readonly memory.  */
> +	    struct target_section *section;
> +
> +	    section = target_section_by_addr (ops, offset);
> +	    if (section != NULL)
> +	      {
> +		/* Check if the section we found is readonly.  */
> +		if ((bfd_get_section_flags (section->bfd,
> +					    section->the_bfd_section)
> +		     & SEC_READONLY) != 0)
> +		  {
> +		    /* Truncate the request to fit into this section.  */
> +		    len = min (len, section->endaddr - offset);
> +		    break;
> +		  }
> +	      }
> +
> +	    return -1;
> +	  }
> +	}
> +    }
> +
> +  /* Forward the request.  */
> +  for (t = ops->beneath; t != NULL; t = t->beneath)
> +    if (t->to_xfer_partial != NULL)
> +      return t->to_xfer_partial (t, object, annex, readbuf, writebuf,
> +				 offset, len);
> +
> +  return -1;
> +}
> +
>  /* The to_fetch_registers method of target record-btrace.  */
>  
>  static void
> @@ -936,6 +993,7 @@ init_record_btrace_ops (void)
>    ops->to_call_history_from = record_btrace_call_history_from;
>    ops->to_call_history_range = record_btrace_call_history_range;
>    ops->to_record_is_replaying = record_btrace_is_replaying;
> +  ops->to_xfer_partial = record_btrace_xfer_partial;
>    ops->to_fetch_registers = record_btrace_fetch_registers;
>    ops->to_store_registers = record_btrace_store_registers;
>    ops->to_prepare_to_store = record_btrace_prepare_to_store;
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 15/24] record-btrace: add to_wait and to_resume target methods.
  2013-07-03  9:15 ` [patch v4 15/24] record-btrace: add to_wait and to_resume target methods Markus Metzger
@ 2013-08-18 19:08   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:08 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:25 +0200, Markus Metzger wrote:
> Add simple to_wait and to_resume target methods that prevent stepping when the
> current replay position is not at the end of the execution log.
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* record-btrace.c (record_btrace_resume): New.
> 	(record_btrace_wait): New.
> 	(init_record_btrace_ops): Initialize to_wait and to_resume.
> 
> 
> ---
>  gdb/record-btrace.c |   41 +++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 41 insertions(+), 0 deletions(-)
> 
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index 831a367..430296a 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -966,6 +966,45 @@ static const struct frame_unwind record_btrace_frame_unwind =
>    NULL,
>    record_btrace_frame_sniffer
>  };
> +
> +/* The to_resume method of target record-btrace.  */
> +
> +static void
> +record_btrace_resume (struct target_ops *ops, ptid_t ptid, int step,
> +		      enum gdb_signal signal)
> +{
> +  /* As long as we're not replaying, just forward the request.  */
> +  if (!record_btrace_is_replaying ())
> +    {
> +      for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
> +	if (ops->to_resume != NULL)
> +	  return ops->to_resume (ops, ptid, step, signal);
> +
> +      error (_("Cannot find target for stepping."));
> +    }
> +
> +  error (_("You can't do this from here.  Do 'record goto end', first."));
> +}
> +
> +/* The to_wait method of target record-btrace.  */
> +
> +static ptid_t
> +record_btrace_wait (struct target_ops *ops, ptid_t ptid,
> +		    struct target_waitstatus *status, int options)
> +{
> +  /* As long as we're not replaying, just forward the request.  */
> +  if (!record_btrace_is_replaying ())
> +    {
> +      for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
> +	if (ops->to_wait != NULL)
> +	  return ops->to_wait (ops, ptid, status, options);
> +
> +      error (_("Cannot find target for stepping."));

"for waiting".

target_wait (and target_resume) call just noprocess () in such case although
I understand this is a different case as btrace target should always have some
live target underneath.  Just a statement, not a request for change.


> +    }
> +
> +  error (_("You can't do this from here.  Do 'record goto end', first."));
> +}
> +
>  /* Initialize the record-btrace target ops.  */
>  
>  static void
> @@ -998,6 +1037,8 @@ init_record_btrace_ops (void)
>    ops->to_store_registers = record_btrace_store_registers;
>    ops->to_prepare_to_store = record_btrace_prepare_to_store;
>    ops->to_get_unwinder = &record_btrace_frame_unwind;
> +  ops->to_resume = record_btrace_resume;
> +  ops->to_wait = record_btrace_wait;
>    ops->to_stratum = record_stratum;
>    ops->to_magic = OPS_MAGIC;
>  }
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 20/24] btrace, gdbserver: read branch trace incrementally
  2013-07-03  9:14 ` [patch v4 20/24] btrace, gdbserver: read branch trace incrementally Markus Metzger
@ 2013-08-18 19:09   ` Jan Kratochvil
  2013-09-16 12:48     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:09 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches, Pedro Alves

On Wed, 03 Jul 2013 11:14:30 +0200, Markus Metzger wrote:
> Read branch trace data incrementally and extend the current trace rather than
> discarding it and reading the entire trace buffer each time.
> 
> If the branch trace buffer overflowed, we can't extend the current trace so we
> discard it and start anew by reading the entire branch trace buffer.
> 
> Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
> CC: Pedro Alves  <palves@redhat.com>
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* common/linux-btrace.c (perf_event_read_bts, linux_read_btrace):
> 	Support delta reads.
> 	* common/linux-btrace.h (linux_read_btrace): Change parameters
> 	and return type to allow error reporting.
> 	* common/btrace-common.h (btrace_read_type)<btrace_read_delta>:
> 	New.
> 	* btrace.c (btrace_compute_ftrace): Start from the end of
> 	the current trace.
> 	(btrace_stitch_trace, btrace_clear_history): New.
> 	(btrace_fetch): Read delta trace.
> 	(btrace_clear): Move clear history code to btrace_clear_history.
> 	(parse_xml_btrace): Throw an error if parsing failed.
> 	* target.h (struct target_ops)<to_read_btrace>: Change parameters
> 	and return type to allow error reporting.
> 	(target_read_btrace): Change parameters and return type to allow
> 	error reporting.
> 	* target.c (target_read_btrace): Update.
> 	* remote.c (remote_read_btrace): Support delta reads.  Pass
> 	errors on.
> 
> gdbserver/
> 	* target.h (target_ops)<read_btrace>: Change parameters and
> 	return type to allow error reporting.
> 	* server.c (handle_qxfer_btrace): Support delta reads.  Pass
> 	trace reading errors on.
> 	* linux-low.c (linux_low_read_btrace): Pass trace reading
> 	errors on.
> 
> 
> ---
>  gdb/NEWS                   |    4 +
>  gdb/btrace.c               |  136 ++++++++++++++++++++++++++++++++++++++------
>  gdb/common/btrace-common.h |    6 ++-
>  gdb/common/linux-btrace.c  |   84 +++++++++++++++++++--------
>  gdb/common/linux-btrace.h  |    5 +-
>  gdb/doc/gdb.texinfo        |    8 +++
>  gdb/gdbserver/linux-low.c  |   18 +++++-
>  gdb/gdbserver/server.c     |   11 +++-
>  gdb/gdbserver/target.h     |    6 +-
>  gdb/remote.c               |   23 ++++---
>  gdb/target.c               |    9 ++-
>  gdb/target.h               |   14 +++--
>  12 files changed, 254 insertions(+), 70 deletions(-)
> 
> diff --git a/gdb/NEWS b/gdb/NEWS
> index 9b9de71..433a968 100644
> --- a/gdb/NEWS
> +++ b/gdb/NEWS
> @@ -124,6 +124,10 @@ qXfer:libraries-svr4:read's annex
>    necessary for library list updating, resulting in significant
>    speedup.
>  
> +qXfer:btrace:read's annex
> +  The qXfer:btrace:read packet supports a new annex 'delta' to read
> +  branch trace incrementally.
> +
>  * New features in the GDB remote stub, GDBserver
>  
>    ** GDBserver now supports target-assisted range stepping.  Currently
> diff --git a/gdb/btrace.c b/gdb/btrace.c
> index 822926c..072e9d3 100644
> --- a/gdb/btrace.c
> +++ b/gdb/btrace.c
> @@ -600,9 +600,9 @@ btrace_compute_ftrace (struct btrace_thread_info *btinfo,
>    DEBUG ("compute ftrace");
>  
>    gdbarch = target_gdbarch ();
> -  begin = NULL;
> -  end = NULL;
> -  level = INT_MAX;
> +  begin = btinfo->begin;
> +  end = btinfo->end;
> +  level = begin != NULL ? -btinfo->level : INT_MAX;
>    blk = VEC_length (btrace_block_s, btrace);
>  
>    while (blk != 0)
> @@ -718,27 +718,138 @@ btrace_teardown (struct thread_info *tp)
>    btrace_clear (tp);
>  }
>  
> +/* Adjust the block trace in order to stitch old and new trace together.
> +   Return 0 on success; -1, otherwise.  */

Isn't it a typo?
  Return 0 on success, -1 otherwise.  */

It took me a while to realize BTRACE is _reversed_.  Please document that
everywhere, e.g. in btrace_compute_ftrace, target_read_btrace,
btrace_stitch_trace, to_read_btrace, read_btrace and maybe some others.
Also, gdb.texinfo does not mention the order of the XML records, so one
assumes the forward/chronological one, but the XML <block/> records are also
in reverse-chronological order.
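
For example, a comment along these lines (adjust the wording as you see fit;
this is just a sketch) at each of those places:

  /* Note that the blocks are in reverse chronological order - the most
     recent block comes first, the oldest block comes last.  */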


> +
> +static int
> +btrace_stitch_trace (VEC (btrace_block_s) **btrace,
> +		     const struct btrace_thread_info *btinfo)
> +{
> +  struct btrace_function *end;
> +  struct btrace_insn *insn;
> +  btrace_block_s *block;
> +
> +  /* If we don't have trace, there's nothing to do.  */
> +  if (VEC_empty (btrace_block_s, *btrace))
> +    return 0;
> +
> +  end = btinfo->end;
> +  gdb_assert (end != NULL);
> +
> +  block = VEC_last (btrace_block_s, *btrace);
> +  insn = VEC_last (btrace_insn_s, end->insn);

style:
At least name BLOCK and INSN after where they come from, maybe btrace_block
and btinfo_end.  Also, END should be called btinfo_end (if the extra variable
still makes sense in that case).

I would even call them new_btrace and old_btinfo, with variables old_end etc.
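
For example (untested; just to illustrate the naming, pick whichever scheme
you prefer):

  /* Illustrative names only.  */
  new_block = VEC_last (btrace_block_s, *btrace);
  old_insn = VEC_last (btrace_insn_s, btinfo->end->insn);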


> +
> +  /* Check if we can extend the trace.  */
> +  if (block->end < insn->pc)
> +    return -1;

Why < and not != ?


> +
> +  /* If the current PC at the end of the block is the same as in our current
> +     trace, there are two explanations:
> +       1. we executed the instruction and some branch brought us back.
> +       2. we have not made any progress.
> +     In the first case, the delta trace vector should contain at least two
> +     entries.
> +     In the second case, the delta trace vector should contain exactly one
> +     entry for the partial block containing the current PC.  Remove it.  */
> +  if (block->end == insn->pc && VEC_length (btrace_block_s, *btrace) == 1)
> +    {
> +      VEC_pop (btrace_block_s, *btrace);
> +      return 0;
> +    }
> +
> +  DEBUG ("stitching %s to %s", ftrace_print_insn_addr (insn),
> +	 core_addr_to_string_nz (block->end));
> +
> +  /* We adjust the last block to start at the end of our current trace.  */
> +  gdb_assert (block->begin == 0);

It is commented in perf_event_read_bts, but this patch introduces the special
value 0 for BEGIN, so it should also be commented in btrace_block::begin.
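
Perhaps something along these lines (the wording is up to you):

  /* The begin address of the block.
     This is zero for the oldest block of a delta read - its real begin
     address is not known; the caller either fills it in when stitching
     traces or prunes the block.  */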


> +  block->begin = insn->pc;
> +
> +  /* We simply pop the last insn so we can insert it again as part of
> +     the normal branch trace computation.
> +     Since instruction iterators are based on indices in the instructions
> +     vector, we don't leave any pointers dangling.  */
> +  DEBUG ("pruning insn at %s for stitching", ftrace_print_insn_addr (insn));
> +
> +  VEC_pop (btrace_insn_s, end->insn);
> +
> +  /* The instructions vector may become empty temporarily if this has
> +     been the only instruction in this function segment.
> +     This violates the invariant but will be remedied shortly.  */
> +  return 0;
> +}
> +
> +/* Clear the branch trace histories in BTINFO.  */
> +
> +static void
> +btrace_clear_history (struct btrace_thread_info *btinfo)
> +{
> +  xfree (btinfo->insn_history);
> +  xfree (btinfo->call_history);
> +  xfree (btinfo->replay);
> +
> +  btinfo->insn_history = NULL;
> +  btinfo->call_history = NULL;
> +  btinfo->replay = NULL;
> +}
> +
>  /* See btrace.h.  */
>  
>  void
>  btrace_fetch (struct thread_info *tp)
>  {
>    struct btrace_thread_info *btinfo;
> +  struct btrace_target_info *tinfo;
>    VEC (btrace_block_s) *btrace;
>    struct cleanup *cleanup;
> +  int errcode;
>  
>    DEBUG ("fetch thread %d (%s)", tp->num, target_pid_to_str (tp->ptid));
>  
> +  btrace = NULL;
>    btinfo = &tp->btrace;
> -  if (btinfo->target == NULL)
> +  tinfo = btinfo->target;
> +  if (tinfo == NULL)
>      return;
>  
> -  btrace = target_read_btrace (btinfo->target, btrace_read_new);
>    cleanup = make_cleanup (VEC_cleanup (btrace_block_s), &btrace);
>  
> +  /* Let's first try to extend the trace we already have.  */
> +  if (btinfo->end != NULL)
> +    {
> +      errcode = target_read_btrace (&btrace, tinfo, btrace_read_delta);
> +      if (errcode == 0)
> +	{
> +	  /* Success.  Let's try to stitch the traces together.  */
> +	  errcode = btrace_stitch_trace (&btrace, btinfo);
> +	}
> +      else
> +	{
> +	  /* We failed to read delta trace.  Let's try to read new trace.  */
> +	  errcode = target_read_btrace (&btrace, tinfo, btrace_read_new);
> +
> +	  /* If we got any new trace, discard what we have.  */
> +	  if (errcode == 0 && !VEC_empty (btrace_block_s, btrace))
> +	    btrace_clear (tp);
> +	}
> +
> +      /* If we were not able to read the trace, we start over.  */
> +      if (errcode != 0)
> +	{
> +	  btrace_clear (tp);
> +	  errcode = target_read_btrace (&btrace, tinfo, btrace_read_all);
> +	}
> +    }
> +  else
> +    errcode = target_read_btrace (&btrace, tinfo, btrace_read_all);
> +
> +  /* If we were not able to read the branch trace, signal an error.  */
> +  if (errcode != 0)
> +    error ("Failed to read branch trace.");

  error (_("Failed to read branch trace."));


> +
> +  /* Compute the trace, provided we have any.  */
>    if (!VEC_empty (btrace_block_s, btrace))
>      {
> -      btrace_clear (tp);
> +      btrace_clear_history (btinfo);
>        btrace_compute_ftrace (btinfo, btrace);
>      }
>  
> @@ -773,13 +884,7 @@ btrace_clear (struct thread_info *tp)
>    btinfo->begin = NULL;
>    btinfo->end = NULL;
>  
> -  xfree (btinfo->insn_history);
> -  xfree (btinfo->call_history);
> -  xfree (btinfo->replay);
> -
> -  btinfo->insn_history = NULL;
> -  btinfo->call_history = NULL;
> -  btinfo->replay = NULL;
> +  btrace_clear_history (btinfo);
>  }
>  
>  /* See btrace.h.  */
> @@ -871,10 +976,7 @@ parse_xml_btrace (const char *buffer)
>    errcode = gdb_xml_parse_quick (_("btrace"), "btrace.dtd", btrace_elements,
>  				 buffer, &btrace);
>    if (errcode != 0)
> -    {
> -      do_cleanups (cleanup);
> -      return NULL;
> -    }
> +    error (_("Error parsing branch trace."));
>  
>    /* Keep parse results.  */
>    discard_cleanups (cleanup);
> diff --git a/gdb/common/btrace-common.h b/gdb/common/btrace-common.h
> index b157c7c..e863a65 100644
> --- a/gdb/common/btrace-common.h
> +++ b/gdb/common/btrace-common.h
> @@ -67,7 +67,11 @@ enum btrace_read_type
>    btrace_read_all,
>  
>    /* Send all available trace, if it changed.  */
> -  btrace_read_new
> +  btrace_read_new,
> +
> +  /* Send the trace since the last request.  This will fail if the trace
> +     buffer overflowed.  */
> +  btrace_read_delta
>  };
>  
>  #endif /* BTRACE_COMMON_H */
> diff --git a/gdb/common/linux-btrace.c b/gdb/common/linux-btrace.c
> index b30a6ec..649b535 100644
> --- a/gdb/common/linux-btrace.c
> +++ b/gdb/common/linux-btrace.c
> @@ -169,11 +169,11 @@ perf_event_sample_ok (const struct perf_event_sample *sample)
>  
>  static VEC (btrace_block_s) *
>  perf_event_read_bts (struct btrace_target_info* tinfo, const uint8_t *begin,
> -		     const uint8_t *end, const uint8_t *start)
> +		     const uint8_t *end, const uint8_t *start, size_t size)
>  {
>    VEC (btrace_block_s) *btrace = NULL;
>    struct perf_event_sample sample;
> -  size_t read = 0, size = (end - begin);
> +  size_t read = 0;
>    struct btrace_block block = { 0, 0 };
>    struct regcache *regcache;
>  
> @@ -249,6 +249,12 @@ perf_event_read_bts (struct btrace_target_info* tinfo, const uint8_t *begin,
>        block.end = psample->bts.from;
>      }
>  
> +  /* Push the last block, as well.  We don't know where it ends, but we

  /* Push the last block (the first one of inferior execution), as well.  [...]


> +     know where it starts.  If we're reading delta trace, we can fill in the
> +     start address later on.  Otherwise, we will prune it.  */
> +  block.begin = 0;
> +  VEC_safe_push (btrace_block_s, btrace, &block);
> +
>    return btrace;
>  }
>  
> @@ -501,21 +507,24 @@ linux_btrace_has_changed (struct btrace_target_info *tinfo)
>  
>  /* See linux-btrace.h.  */
>  
> -VEC (btrace_block_s) *
> -linux_read_btrace (struct btrace_target_info *tinfo,
> +int
> +linux_read_btrace (VEC (btrace_block_s) **btrace,
> +		   struct btrace_target_info *tinfo,
>  		   enum btrace_read_type type)
>  {
> -  VEC (btrace_block_s) *btrace = NULL;
>    volatile struct perf_event_mmap_page *header;
>    const uint8_t *begin, *end, *start;
> -  unsigned long data_head, retries = 5;
> -  size_t buffer_size;
> +  unsigned long data_head, data_tail, retries = 5;
> +  size_t buffer_size, size;
>  
> +  /* For delta reads, we return at least the partial last block containing
> +     the current PC.  */
>    if (type == btrace_read_new && !linux_btrace_has_changed (tinfo))
> -    return NULL;
> +    return 0;

This relies on the caller having set *BTRACE to NULL before calling this
function.  It would be better to set it here in the callee and remove the
"*btrace = NULL;" statements from the callers.


>  
>    header = perf_event_header (tinfo);
>    buffer_size = perf_event_buffer_size (tinfo);
> +  data_tail = tinfo->data_head;
>  
>    /* We may need to retry reading the trace.  See below.  */
>    while (retries--)
> @@ -523,23 +532,45 @@ linux_read_btrace (struct btrace_target_info *tinfo,
>        data_head = header->data_head;
>  
>        /* Delete any leftover trace from the previous iteration.  */
> -      VEC_truncate (btrace_block_s, btrace, 0);
> +      VEC_truncate (btrace_block_s, *btrace, 0);
>  
> -      /* If there's new trace, let's read it.  */
> -      if (data_head != tinfo->data_head)
> +      if (type == btrace_read_delta)
>  	{
> -	  /* Data_head keeps growing; the buffer itself is circular.  */
> -	  begin = perf_event_buffer_begin (tinfo);
> -	  start = begin + data_head % buffer_size;
> -
> -	  if (data_head <= buffer_size)
> -	    end = start;
> -	  else
> -	    end = perf_event_buffer_end (tinfo);
> +	  /* Determine the number of bytes to read and check for buffer
> +	     overflows.  */
> +
> +	  /* Check for data head overflows.  We might be able to recover from
> +	     those but they are very unlikely and it's not really worth the
> +	     effort, I think.  */
> +	  if (data_head < data_tail)
> +	    return -EOVERFLOW;
> +
> +	  /* If the buffer is smaller than the trace delta, we overflowed.  */
> +	  size = data_head - data_tail;
> +	  if (buffer_size < size)
> +	    return -EOVERFLOW;
> +	}
> +      else
> +	{
> +	  /* Read the entire buffer.  */
> +	  size = buffer_size;
>  
> -	  btrace = perf_event_read_bts (tinfo, begin, end, start);
> +	  /* Adjust the size if the buffer has not overflowed, yet.  */
> +	  if (data_head < size)
> +	    size = data_head;
>  	}
>  
> +      /* Data_head keeps growing; the buffer itself is circular.  */
> +      begin = perf_event_buffer_begin (tinfo);
> +      start = begin + data_head % buffer_size;
> +
> +      if (data_head <= buffer_size)
> +	end = start;
> +      else
> +	end = perf_event_buffer_end (tinfo);
> +
> +      *btrace = perf_event_read_bts (tinfo, begin, end, start, size);
> +
>        /* The stopping thread notifies its ptracer before it is scheduled out.
>  	 On multi-core systems, the debugger might therefore run while the
>  	 kernel might be writing the last branch trace records.
> @@ -551,7 +582,11 @@ linux_read_btrace (struct btrace_target_info *tinfo,
>  
>    tinfo->data_head = data_head;
>  
> -  return btrace;
> +  /* Prune the incomplete last block if we're not doing a delta read.  */

  /* Prune the incomplete last block (the first one of inferior execution) if [...]
     There is no way to fill in its zeroed BEGIN element.  */


> +  if (!VEC_empty (btrace_block_s, *btrace) && type != btrace_read_delta)
> +    VEC_pop (btrace_block_s, *btrace);
> +
> +  return 0;
>  }
>  
>  #else /* !HAVE_LINUX_PERF_EVENT_H */
> @@ -582,11 +617,12 @@ linux_disable_btrace (struct btrace_target_info *tinfo)
>  
>  /* See linux-btrace.h.  */
>  
> -VEC (btrace_block_s) *
> -linux_read_btrace (struct btrace_target_info *tinfo,
> +int
> +linux_read_btrace (VEC (btrace_block_s) **btrace,
> +		   struct btrace_target_info *tinfo,
>  		   enum btrace_read_type type)
>  {
> -  return NULL;
> +  return ENOSYS;

You return -EOVERFLOW in the real implementation but ENOSYS here; the signs do
not match (and the convention is not documented).  linux_low_read_btrace
checks for -EOVERFLOW.
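
Presumably the stub should use the negative form as well, i.e.:

  return -ENOSYS;

(or positive errno codes everywhere - either way the convention needs to be
documented at the declaration).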


>  }
>  
>  #endif /* !HAVE_LINUX_PERF_EVENT_H */
> diff --git a/gdb/common/linux-btrace.h b/gdb/common/linux-btrace.h
> index d4e8402..82397b7 100644
> --- a/gdb/common/linux-btrace.h
> +++ b/gdb/common/linux-btrace.h
> @@ -71,7 +71,8 @@ extern struct btrace_target_info *linux_enable_btrace (ptid_t ptid);
>  extern int linux_disable_btrace (struct btrace_target_info *tinfo);
>  
>  /* Read branch trace data.  */

You should name all the parameters and explain them, e.g. that the first one
is a return-value parameter.  You should also describe the return value.
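
Something like the following (untested wording; the return value description
assumes the negative errno convention discussed in the other comments):

/* Read branch trace data for TINFO into *BTRACE according to TYPE.
   *BTRACE is cleared before any new blocks are appended; the blocks are in
   reverse chronological order.
   Return zero on success, a negative errno code otherwise.  */
extern int linux_read_btrace (VEC (btrace_block_s) **btrace,
			      struct btrace_target_info *tinfo,
			      enum btrace_read_type type);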


> -extern VEC (btrace_block_s) *linux_read_btrace (struct btrace_target_info *,
> -						enum btrace_read_type);
> +extern int linux_read_btrace (VEC (btrace_block_s) **,
> +			      struct btrace_target_info *,
> +			      enum btrace_read_type);
>  
>  #endif /* LINUX_BTRACE_H */
> diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
> index eb4896f..2dc45bc 100644
> --- a/gdb/doc/gdb.texinfo
> +++ b/gdb/doc/gdb.texinfo
> @@ -39161,6 +39161,14 @@ Returns all available branch trace.
>  @item new
>  Returns all available branch trace if the branch trace changed since
>  the last read request.
> +
> +@item delta
> +Returns the new branch trace since the last read request.  Adds a new
> +block to the end of the trace that begins at zero and ends at the source
> +location of the first branch in the trace buffer.  This extra block is
> +used to stitch traces together.
> +
> +If the trace buffer overflowed, returns an error indicating the overflow.
>  @end table
>  
>  This packet is not probed by default; the remote stub must request it
> diff --git a/gdb/gdbserver/linux-low.c b/gdb/gdbserver/linux-low.c
> index 47ea76d..709405c 100644
> --- a/gdb/gdbserver/linux-low.c
> +++ b/gdb/gdbserver/linux-low.c
> @@ -5964,15 +5964,25 @@ linux_low_enable_btrace (ptid_t ptid)
>  
>  /* Read branch trace data as btrace xml document.  */

Make a reference here to the target_ops.read_btrace field, which among other
things describes the return value.
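
E.g.:

/* Read branch trace data as a btrace xml document into BUFFER.  See
   target_ops.read_btrace for the parameters and the return value.  */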


>  
> -static void
> +static int
>  linux_low_read_btrace (struct btrace_target_info *tinfo, struct buffer *buffer,
>  		       int type)
>  {
>    VEC (btrace_block_s) *btrace;
>    struct btrace_block *block;
> -  int i;
> +  int i, errcode;
> +
> +  btrace = NULL;
> +  errcode = linux_read_btrace (&btrace, tinfo, type);
> +  if (errcode != 0)
> +    {
> +      if (errcode == -EOVERFLOW)
> +	buffer_grow_str (buffer, "E.Overflow.");
> +      else
> +	buffer_grow_str (buffer, "E.Generic Error.");
>  
> -  btrace = linux_read_btrace (tinfo, type);
> +      return -1;
> +    }
>  
>    buffer_grow_str (buffer, "<!DOCTYPE btrace SYSTEM \"btrace.dtd\">\n");
>    buffer_grow_str (buffer, "<btrace version=\"1.0\">\n");
> @@ -5984,6 +5994,8 @@ linux_low_read_btrace (struct btrace_target_info *tinfo, struct buffer *buffer,
>    buffer_grow_str (buffer, "</btrace>\n");
>  
>    VEC_free (btrace_block_s, btrace);
> +
> +  return 0;
>  }
>  #endif /* HAVE_LINUX_BTRACE */
>  
> diff --git a/gdb/gdbserver/server.c b/gdb/gdbserver/server.c
> index a172c98..c518f62 100644
> --- a/gdb/gdbserver/server.c
> +++ b/gdb/gdbserver/server.c
> @@ -1343,7 +1343,7 @@ handle_qxfer_btrace (const char *annex,
>  {
>    static struct buffer cache;
>    struct thread_info *thread;
> -  int type;
> +  int type, result;
>  
>    if (the_target->read_btrace == NULL || writebuf != NULL)
>      return -2;
> @@ -1375,6 +1375,8 @@ handle_qxfer_btrace (const char *annex,
>      type = btrace_read_all;
>    else if (strcmp (annex, "new") == 0)
>      type = btrace_read_new;
> +  else if (strcmp (annex, "delta") == 0)
> +    type = btrace_read_delta;
>    else
>      {
>        strcpy (own_buf, "E.Bad annex.");
> @@ -1385,7 +1387,12 @@ handle_qxfer_btrace (const char *annex,
>      {
>        buffer_free (&cache);
>  
> -      target_read_btrace (thread->btrace, &cache, type);
> +      result = target_read_btrace (thread->btrace, &cache, type);
> +      if (result != 0)
> +	{
> +	  memcpy (own_buf, cache.buffer, cache.used_size);

target_read_btrace used buffer_grow_str, but here you expect it to have used
buffer_grow_str0.  So change one of them appropriately.
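
I.e. one of the two possible fixes (untested) would be, in
linux_low_read_btrace:

      if (errcode == -EOVERFLOW)
	buffer_grow_str0 (buffer, "E.Overflow.");
      else
	buffer_grow_str0 (buffer, "E.Generic Error.");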


> +	  return -3;
> +	}
>      }
>    else if (offset > cache.used_size)
>      {
> diff --git a/gdb/gdbserver/target.h b/gdb/gdbserver/target.h
> index c57cb40..1bb1f23 100644
> --- a/gdb/gdbserver/target.h
> +++ b/gdb/gdbserver/target.h
> @@ -420,8 +420,10 @@ struct target_ops
>    int (*disable_btrace) (struct btrace_target_info *tinfo);
>  
>    /* Read branch trace data into buffer.  We use an int to specify the type
> -     to break a cyclic dependency.  */
> -  void (*read_btrace) (struct btrace_target_info *, struct buffer *, int type);
> +     to break a cyclic dependency.
> +     Return 0 on success; print an error message into BUFFER and return -1,
> +     otherwise.  */
> +  int (*read_btrace) (struct btrace_target_info *, struct buffer *, int type);
>  
>    /* Return true if target supports range stepping.  */
>    int (*supports_range_stepping) (void);
> diff --git a/gdb/remote.c b/gdb/remote.c
> index b352ca6..705aa66 100644
> --- a/gdb/remote.c
> +++ b/gdb/remote.c
> @@ -11417,13 +11417,14 @@ remote_teardown_btrace (struct btrace_target_info *tinfo)
>  
>  /* Read the branch trace.  */
>  
> -static VEC (btrace_block_s) *
> -remote_read_btrace (struct btrace_target_info *tinfo,
> +static int
> +remote_read_btrace (VEC (btrace_block_s) **btrace,
> +		    struct btrace_target_info *tinfo,
>  		    enum btrace_read_type type)
>  {
>    struct packet_config *packet = &remote_protocol_packets[PACKET_qXfer_btrace];
>    struct remote_state *rs = get_remote_state ();
> -  VEC (btrace_block_s) *btrace = NULL;
> +  struct cleanup *cleanup;
>    const char *annex;
>    char *xml;
>  
> @@ -11442,6 +11443,9 @@ remote_read_btrace (struct btrace_target_info *tinfo,
>      case btrace_read_new:
>        annex = "new";
>        break;
> +    case btrace_read_delta:
> +      annex = "delta";
> +      break;
>      default:
>        internal_error (__FILE__, __LINE__,
>  		      _("Bad branch tracing read type: %u."),
> @@ -11450,15 +11454,14 @@ remote_read_btrace (struct btrace_target_info *tinfo,
>  
>    xml = target_read_stralloc (&current_target,
>                                TARGET_OBJECT_BTRACE, annex);
> -  if (xml != NULL)
> -    {
> -      struct cleanup *cleanup = make_cleanup (xfree, xml);
> +  if (xml == NULL)
> +    return -1;
>  
> -      btrace = parse_xml_btrace (xml);
> -      do_cleanups (cleanup);
> -    }
> +  cleanup = make_cleanup (xfree, xml);
> +  *btrace = parse_xml_btrace (xml);
> +  do_cleanups (cleanup);
>  
> -  return btrace;
> +  return 0;
>  }
>  
>  static int
> diff --git a/gdb/target.c b/gdb/target.c
> index 58388f3..33f774e 100644
> --- a/gdb/target.c
> +++ b/gdb/target.c
> @@ -4237,18 +4237,19 @@ target_teardown_btrace (struct btrace_target_info *btinfo)
>  
>  /* See target.h.  */
>  
> -VEC (btrace_block_s) *
> -target_read_btrace (struct btrace_target_info *btinfo,
> +int
> +target_read_btrace (VEC (btrace_block_s) **btrace,
> +		    struct btrace_target_info *btinfo,
>  		    enum btrace_read_type type)
>  {
>    struct target_ops *t;
>  
>    for (t = current_target.beneath; t != NULL; t = t->beneath)
>      if (t->to_read_btrace != NULL)
> -      return t->to_read_btrace (btinfo, type);
> +      return t->to_read_btrace (btrace, btinfo, type);
>  
>    tcomplain ();
> -  return NULL;
> +  return ENOSYS;
>  }
>  
>  /* See target.h.  */
> diff --git a/gdb/target.h b/gdb/target.h
> index 632bf1d..4a20533 100644
> --- a/gdb/target.h
> +++ b/gdb/target.h
> @@ -882,9 +882,12 @@ struct target_ops
>         be attempting to talk to a remote target.  */
>      void (*to_teardown_btrace) (struct btrace_target_info *tinfo);
>  
> -    /* Read branch trace data.  */
> -    VEC (btrace_block_s) *(*to_read_btrace) (struct btrace_target_info *,
> -					     enum btrace_read_type);
> +    /* Read branch trace data into DATA.  The vector is cleared before any
> +       new data is added.
> +       Returns 0 on success; a negative error code, otherwise.  */

"a negative errno code" (error code seems too ambiguous to me)

But target_read_btrace several lines above returns a positive errno code.

TBH, returning all these errno codes is not common in GDB; returning just -1
would make it easier, but I do not insist on it.


> +    int (*to_read_btrace) (VEC (btrace_block_s) **data,
> +			   struct btrace_target_info *,
> +			   enum btrace_read_type);
>  
>      /* Stop trace recording.  */
>      void (*to_stop_recording) (void);
> @@ -2010,8 +2013,9 @@ extern void target_disable_btrace (struct btrace_target_info *btinfo);
>  extern void target_teardown_btrace (struct btrace_target_info *btinfo);
>  
>  /* See to_read_btrace in struct target_ops.  */
> -extern VEC (btrace_block_s) *target_read_btrace (struct btrace_target_info *,
> -						 enum btrace_read_type);
> +extern int target_read_btrace (VEC (btrace_block_s) **,
> +			       struct btrace_target_info *,
> +			       enum btrace_read_type);
>  
>  /* See to_stop_recording in struct target_ops.  */
>  extern void target_stop_recording (void);
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 22/24] infrun: reverse stepping from unknown functions
  2013-07-03  9:14 ` [patch v4 22/24] infrun: reverse stepping from unknown functions Markus Metzger
@ 2013-08-18 19:09   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:09 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:32 +0200, Markus Metzger wrote:
> When reverse-stepping, only insert a resume breakpoint at ecs->stop_func_start
> if the function start is known.  Otherwise, keep single-stepping.

A testcase would be nice, but I understand the fix is obvious, so it is OK
without a testcase.


> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* infrun.c (handle_inferior_event): Check if we know the function
> 	start address.
> 
> 
> ---
>  gdb/infrun.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/gdb/infrun.c b/gdb/infrun.c
> index dc1036d..bd44016 100644
> --- a/gdb/infrun.c
> +++ b/gdb/infrun.c
> @@ -4939,7 +4939,7 @@ process_event_stop_test:
>  		 or stepped back out of a signal handler to the first instruction
>  		 of the function.  Just keep going, which will single-step back
>  		 to the caller.  */
> -	      if (ecs->stop_func_start != stop_pc)
> +	      if (ecs->stop_func_start != stop_pc && ecs->stop_func_start != 0)
>  		{
>  		  struct symtab_and_line sr_sal;
>  
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 19/24] btrace, linux: fix memory leak when reading branch trace
  2013-07-03  9:14 ` [patch v4 19/24] btrace, linux: fix memory leak when reading branch trace Markus Metzger
@ 2013-08-18 19:09   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:09 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:29 +0200, Markus Metzger wrote:
> When it takes more than one iteration to read the BTS trace, the trace from the
> previous iteration is leaked.  Fix it.
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* common/linux-btrace.c (linux_read_btrace): Free trace from
> 	previous iteration.
> 
> 
> ---
>  gdb/common/linux-btrace.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/gdb/common/linux-btrace.c b/gdb/common/linux-btrace.c
> index 4880f41..b30a6ec 100644
> --- a/gdb/common/linux-btrace.c
> +++ b/gdb/common/linux-btrace.c
> @@ -522,6 +522,9 @@ linux_read_btrace (struct btrace_target_info *tinfo,
>      {
>        data_head = header->data_head;
>  
> +      /* Delete any leftover trace from the previous iteration.  */
> +      VEC_truncate (btrace_block_s, btrace, 0);

This still leaks; you should use VEC_free.  Later you do:
	*btrace = perf_event_read_bts (tinfo, begin, end, start, size);

which overwrites the pointer to a validly allocated VEC that merely has zero
elements.
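
I.e. (untested):

      /* Delete any leftover trace from the previous iteration.  */
      VEC_free (btrace_block_s, btrace);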


> +
>        /* If there's new trace, let's read it.  */
>        if (data_head != tinfo->data_head)
>  	{
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 23/24] record-btrace: add (reverse-)stepping support
  2013-07-03  9:15 ` [patch v4 23/24] record-btrace: add (reverse-)stepping support Markus Metzger
@ 2013-08-18 19:09   ` Jan Kratochvil
  2013-09-17  9:43     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:09 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

[-- Attachment #1: Type: text/plain, Size: 6490 bytes --]

On Wed, 03 Jul 2013 11:14:33 +0200, Markus Metzger wrote:
> There's an open issue regarding frame unwinding.  When I start stepping, the
> frame cache will still be based on normal unwinding, as will the frame cached
> in the thread's stepping context.  This will prevent me from detecting that I
> stepped into a subroutine.

Where do you detect you have stepped into a subroutine? That is up to GDB
after your to_wait returns, in handle_inferior_event.


> To overcome that, I'm resetting the frame cache and setting the thread's
> stepping cache based on the current frame - which is now computed using branch
> tracing unwind.  I had to split get_current_frame to avoid checks that would
> prevent me from doing this.

This is not correct: until to_wait finishes, the inferior is still executing
and you cannot query its current state (such as its frame/PC/registers).

I am probably still missing why you do so.


I am proposing a hacked draft patch (attached below), but for some testcases
it FAILs for me; they FAIL even without this patch, though, as I run it on
Nehalem.  I understand I may be missing some problem there.


> It looks like I don't need any special support for breakpoints.  Is there a
> scenario where normal breakpoints won't work?

You already handle it specially in BTHR_CONT and in BTHR_RCONT via
breakpoint_here_p.  As btrace does not record any data changes, that may be
enough.  "record full" is in a different situation, as it records data
changes.  I think it is fine as you wrote it.

You could handle BTHR_CONT and BTHR_RCONT the same way as BTHR_STEP and
BTHR_RSTEP, just returning TARGET_WAITKIND_SPURIOUS instead of
TARGET_WAITKIND_STOPPED.  This way you would not need the special
breakpoint_here_p handling.  But it would surely be slower.


> Non-stop mode is not working.  Do not allow record-btrace in non-stop mode.

While that seems OK for the initial check-in, I do not think it is convenient.
Some users run for example Eclipse in non-stop mode; they would not be able to
use btrace then, as one cannot change the non-stop state while the inferior is
running.  You can just disable the ALL_THREADS cases in record-btrace.c, can't
you?


This mail is not really a full review yet, as the design should be settled
first.


> --- a/gdb/btrace.h
> +++ b/gdb/btrace.h
> @@ -149,6 +149,25 @@ struct btrace_call_history
>    struct btrace_call_iterator end;
>  };
>  
> +/* Branch trace thread flags.  */
> +enum btrace_thread_flag
> +  {

enum btrace_thread_flag
{


> +    /* The thread is to be stepped forwards.  */
> +    BTHR_STEP = (1 << 0),
> +
> +    /* The thread is to be stepped backwards.  */
> +    BTHR_RSTEP = (1 << 1),
> +
> +    /* The thread is to be continued forwards.  */
> +    BTHR_CONT = (1 << 2),
> +
> +    /* The thread is to be continued backwards.  */
> +    BTHR_RCONT = (1 << 3),
> +
> +    /* The thread is to be moved.  */
> +    BTHR_MOVE = (BTHR_STEP | BTHR_RSTEP | BTHR_CONT | BTHR_RCONT)
> +  };
> +
>  /* Branch trace information per thread.
>  
>     This represents the branch trace configuration as well as the entry point
> @@ -176,6 +195,9 @@ struct btrace_thread_info
>       becomes zero.  */
>    int level;
>  
> +  /* A bit-vector of btrace_thread_flag.  */
> +  unsigned int flags;

Use enum btrace_thread_flag as the type here; the values are then also
properly displayed by GDB.
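
I.e. (illustrating the suggestion):

  /* A bit-vector of btrace_thread_flag.  */
  enum btrace_thread_flag flags;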


> +
>    /* The instruction history iterator.  */
>    struct btrace_insn_history *insn_history;
>  
[...]
> --- a/gdb/frame.c
> +++ b/gdb/frame.c
> @@ -1367,6 +1367,29 @@ unwind_to_current_frame (struct ui_out *ui_out, void *args)
>    return 0;
>  }
>  
> +/* See frame.h.  */
> +
> +struct frame_info *get_current_frame_nocheck (void)
> +{
> +  if (current_frame == NULL)
> +    {
> +      struct frame_info *sentinel_frame =
> +	create_sentinel_frame (current_program_space, get_current_regcache ());
> +
> +      if (catch_exceptions (current_uiout, unwind_to_current_frame,
> +			    sentinel_frame, RETURN_MASK_ERROR) != 0)
> +	{
> +	  /* Oops! Fake a current frame?  Is this useful?  It has a PC
> +             of zero, for instance.  */
> +	  current_frame = sentinel_frame;
> +	}
> +    }
> +
> +  return current_frame;
> +}
> +
> +/* See frame.h.  */
> +
>  struct frame_info *
>  get_current_frame (void)
>  {


> @@ -1381,6 +1404,7 @@ get_current_frame (void)
>      error (_("No stack."));
>    if (!target_has_memory)
>      error (_("No memory."));
> +
>    /* Traceframes are effectively a substitute for the live inferior.  */
>    if (get_traceframe_number () < 0)
>      {

Unrelated patch chunk.  But the get_current_frame() part of the patch should
be dropped anyway.


> @@ -1392,19 +1416,7 @@ get_current_frame (void)
>  	error (_("Target is executing."));
>      }
>  
> -  if (current_frame == NULL)
> -    {
> -      struct frame_info *sentinel_frame =
> -	create_sentinel_frame (current_program_space, get_current_regcache ());
> -      if (catch_exceptions (current_uiout, unwind_to_current_frame,
> -			    sentinel_frame, RETURN_MASK_ERROR) != 0)
> -	{
> -	  /* Oops! Fake a current frame?  Is this useful?  It has a PC
> -             of zero, for instance.  */
> -	  current_frame = sentinel_frame;
> -	}
> -    }
> -  return current_frame;
> +  return get_current_frame_nocheck ();
>  }
>  
>  /* The "selected" stack frame is used by default for local and arg
[...]
> +    case BTHR_CONT:
> +      /* We're done if we're not replaying.  */
> +      if (replay == NULL)
> +	return btrace_step_no_history ();
> +
> +      /* I'd much rather go from TP to its inferior, but how?  */

find_inferior_pid (ptid_get_pid (tp->ptid))
Although I do not see why you prefer the inferior here.


> +      aspace = current_inferior ()->aspace;
> +
> +      /* Determine the end of the instruction trace.  */
> +      btrace_insn_end (&end, btinfo);
> +
> +      for (;;)
> +	{
> +	  const struct btrace_insn *insn;
> +
> +	  /* We are always able to step at least once.  */
> +	  steps = btrace_insn_next (replay, 1);
> +	  gdb_assert (steps == 1);
> +
> +	  /* We stop replaying if we reached the end of the trace.  */
> +	  if (btrace_insn_cmp (replay, &end) == 0)
> +	    {
> +	      record_btrace_stop_replaying (btinfo);
> +	      return btrace_step_no_history ();
> +	    }
> +
> +	  insn = btrace_insn_get (replay);
> +	  gdb_assert (insn);
> +
> +	  DEBUG ("stepping %d (%s) ... %s", tp->num,
> +		 target_pid_to_str (tp->ptid),
> +		 core_addr_to_string_nz (insn->pc));
> +
> +	  if (breakpoint_here_p (aspace, insn->pc))
> +	    return btrace_step_stopped ();
> +	}
> +

[-- Attachment #2: btrace-towait.patch --]
[-- Type: text/plain, Size: 4297 bytes --]

diff --git a/gdb/btrace.h b/gdb/btrace.h
index 22fabb5..8eceec4 100644
--- a/gdb/btrace.h
+++ b/gdb/btrace.h
@@ -27,6 +27,7 @@
    list of sequential control-flow blocks, one such list per thread.  */
 
 #include "btrace-common.h"
+#include "target.h"
 
 struct thread_info;
 struct btrace_function;
@@ -198,6 +199,8 @@ struct btrace_thread_info
   /* A bit-vector of btrace_thread_flag.  */
   unsigned int flags;
 
+struct target_waitstatus status;
+
   /* The instruction history iterator.  */
   struct btrace_insn_history *insn_history;
 
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index 9feda30..633990a 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -1190,6 +1190,8 @@ static const struct frame_unwind record_btrace_frame_unwind =
   record_btrace_frame_dealloc_cache
 };
 
+static struct target_waitstatus record_btrace_step_thread (struct thread_info *tp);
+
 /* Indicate that TP should be resumed according to FLAG.  */
 
 static void
@@ -1209,6 +1211,10 @@ record_btrace_resume_thread (struct thread_info *tp,
   btrace_fetch (tp);
 
   btinfo->flags |= flag;
+
+
+/* We only move a single thread.  We're not able to correlate threads.  */
+btinfo->status = record_btrace_step_thread (tp);
 }
 
 /* Find the thread to resume given a PTID.  */
@@ -1248,6 +1254,7 @@ record_btrace_start_replaying (struct btrace_thread_info *btinfo)
   gdb_assert (btinfo->replay == NULL);
   btinfo->replay = replay;
 
+#if 0
   /* Make sure we're not using any stale registers or frames.  */
   registers_changed ();
   reinit_frame_cache ();
@@ -1258,6 +1265,7 @@ record_btrace_start_replaying (struct btrace_thread_info *btinfo)
   insn = btrace_insn_get (replay);
   sal = find_pc_line (insn->pc, 0);
   set_step_info (frame, sal);
+#endif
 
   return replay;
 }
@@ -1271,6 +1279,8 @@ record_btrace_stop_replaying (struct btrace_thread_info *btinfo)
   btinfo->replay = NULL;
 }
 
+static int forward_to_beneath;
+
 /* The to_resume method of target record-btrace.  */
 
 static void
@@ -1290,7 +1300,9 @@ record_btrace_resume (struct target_ops *ops, ptid_t ptid, int step,
       record_btrace_stop_replaying (&other->btrace);
 
   /* As long as we're not replaying, just forward the request.  */
-  if (!record_btrace_is_replaying () && execution_direction != EXEC_REVERSE)
+  forward_to_beneath = (!record_btrace_is_replaying ()
+                        && execution_direction != EXEC_REVERSE);
+  if (forward_to_beneath)
     {
       for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
 	if (ops->to_resume != NULL)
@@ -1400,7 +1412,7 @@ record_btrace_step_thread (struct thread_info *tp)
   replay = btinfo->replay;
 
   flag = btinfo->flags & BTHR_MOVE;
-  btinfo->flags &= ~BTHR_MOVE;
+//  btinfo->flags &= ~BTHR_MOVE;
 
   DEBUG ("stepping %d (%s): %u", tp->num, target_pid_to_str (tp->ptid), flag);
 
@@ -1517,7 +1529,7 @@ record_btrace_wait (struct target_ops *ops, ptid_t ptid,
   DEBUG ("wait %s (0x%x)", target_pid_to_str (ptid), options);
 
   /* As long as we're not replaying, just forward the request.  */
-  if (!record_btrace_is_replaying () && execution_direction != EXEC_REVERSE)
+  if (forward_to_beneath)
     {
       for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
 	if (ops->to_wait != NULL)
@@ -1536,8 +1548,11 @@ record_btrace_wait (struct target_ops *ops, ptid_t ptid,
       return minus_one_ptid;
     }
 
+#if 0
   /* We only move a single thread.  We're not able to correlate threads.  */
   *status = record_btrace_step_thread (tp);
+#endif
+*status=tp->btrace.status;
 
   /* Stop all other threads. */
   if (!non_stop)
@@ -1547,9 +1562,11 @@ record_btrace_wait (struct target_ops *ops, ptid_t ptid,
   /* Start record histories anew from the current position.  */
   record_btrace_clear_histories (&tp->btrace);
 
+#if 0
   /* GDB seems to need this.  Without, a stale PC seems to be used resulting in
      the current location to be displayed incorrectly.  */
   registers_changed ();
+#endif
 
   return tp->ptid;
 }
diff --git a/gdb/target.h b/gdb/target.h
index 4a20533..e85b063 100644
--- a/gdb/target.h
+++ b/gdb/target.h
@@ -62,7 +62,7 @@ struct expression;
 #include "memattr.h"
 #include "vec.h"
 #include "gdb_signals.h"
-#include "btrace.h"
+#include "btrace-common.h"
 #include "command.h"
 
 enum strata

^ permalink raw reply	[flat|nested] 88+ messages in thread

* instruction_history.exp unset variable  [Re: [patch v4 21/24] record-btrace: show trace from enable location]
  2013-07-03  9:15 ` [patch v4 21/24] record-btrace: show trace from enable location Markus Metzger
@ 2013-08-18 19:10   ` Jan Kratochvil
  2013-09-16 14:11     ` Metzger, Markus T
  2013-08-18 19:16   ` [patch v4 21/24] record-btrace: show trace from enable location Jan Kratochvil
  1 sibling, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:10 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:31 +0200, Markus Metzger wrote:
> --- a/gdb/testsuite/gdb.btrace/instruction_history.exp
> +++ b/gdb/testsuite/gdb.btrace/instruction_history.exp
> @@ -56,42 +56,42 @@ gdb_test_multiple "info record" $testname {
>      }
>  }
>  
> -# we have exactly 6 instructions here
> -set message "exactly 6 instructions"
> -if { $traced != 6 } {
> +# we have exactly 11 instructions here
> +set message "exactly 11 instructions"
> +if { $traced != 11 } {
>      fail $message
>  } else {
>      pass $message
>  }

Not related to this patch but here is a bug:

set testname "determine number of recorded instructions"
gdb_test_multiple "info record" $testname {
    -re "Active record target: record-btrace\r\nRecorded \(\[0-9\]*\) instructions in \(\[0-9\]*\) functions for thread 1 .*\\.\r\n$gdb_prompt $" {
        set traced $expect_out(1,string)
        set traced_functions $expect_out(2,string)
        pass $testname
    }
}

# we have exactly 11 instructions here
set message "exactly 11 instructions"
if { $traced != 11 } {
[...]

If the first test FAILs then the testcase aborts (aborting also other tests in
its group):

Running ./gdb.btrace/instruction_history.exp ...
FAIL: gdb.btrace/instruction_history.exp: record btrace
FAIL: gdb.btrace/instruction_history.exp: determine number of recorded instructions
ERROR: tcl error sourcing ./gdb.btrace/instruction_history.exp.
ERROR: can't read "traced": no such variable
    while executing
"if { $traced != 11 } {
    fail $message
} else {
    pass $message
}"
[...]


There should be some
	set traced ""
before gdb_test_multiple.
BTW $traced_functions is not used anywhere.


Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 24/24] record-btrace: skip tail calls in back trace
  2013-07-03  9:14 ` [patch v4 24/24] record-btrace: skip tail calls in back trace Markus Metzger
@ 2013-08-18 19:10   ` Jan Kratochvil
  2013-09-17 14:28     ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:10 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:34 +0200, Markus Metzger wrote:
> The branch trace represents the caller/callee relationship of tail calls.  The
> caller of a tail call is shown in the back trace and in the function-call
> history.
> 
> This is not consistent with GDB's normal behavior, where the tail caller is not
> shown in the back trace.

This depends on the compiler and its options.  With recent GCCs and -O2 -g
compilation tail calls are shown.  They are even tested for (full) reverse
execution:
Running ./gdb.reverse/amd64-tailcall-reverse.exp ...
Running ./gdb.arch/amd64-tailcall-ret.exp ...
Running ./gdb.arch/amd64-tailcall-cxx.exp ...
Running ./gdb.arch/amd64-tailcall-noret.exp ...

In the -O0 -g mode they are not shown just because of the lack of debug info.
AFAIK it is too expensive for GCC to produce it while -O0 -g compilation
should be fast.

Surprisingly, in some cases this gives -O2 -g compilation a better debugging
experience than -O0 -g compilation.

Still, the missing tail calls in -O0 -g mode are a defect of the compiler, and
GDB should not try to mimic it when it can provide better debugging output.

If you find that reverse execution really should be equal to forward
execution, you could suppress the tail calls only if symtab->call_site_htab is
NULL (and therefore the compiler did not provide tail-call info).
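
A minimal sketch of what such a conditional skip could look like in
record_btrace_frame_sniffer; the symtab lookup, the PC variable, and the exact
condition are assumptions for illustration, not code from the posted patch:

  struct symtab *symtab;

  /* Assume PC holds the current replay location.  Skip tail callers only
     when the compiler did not provide tail-call information; otherwise keep
     them visible, as a live back trace would.  */
  symtab = find_pc_symtab (pc);
  if (symtab == NULL || symtab->call_site_htab == NULL)
    while (caller != NULL && (bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) != 0)
      {
        bfun = caller;
        caller = bfun->up;
      }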


Still, when I revert this GDB code patch, gdb.btrace/rn-dl-bind.exp does not
reverse-next properly - what is the reason?

reverse-next^M
__GI_____strtoul_l_internal (nptr=<unavailable>, endptr=<unavailable>, base=<optimized out>, group=<optimized out>, loc=<optimized out>) at ../stdlib/strtol_l.c:531^M
531     }^M
(gdb) FAIL: gdb.btrace/rn-dl-bind.exp: rn-dl-bind, 2.3
bt^M
#0  __GI_____strtoul_l_internal (nptr=<unavailable>, endptr=<unavailable>, base=<optimized out>, group=<optimized out>, loc=<optimized out>) at ../stdlib/strtol_l.c:531^M
#1  0x00007ffff7228f8d in __GI_strtoul (nptr=<error reading variable: Registers are not available in btrace record history>, endptr=<error reading variable: Registers are not available in btrace record history>, base=<error reading variable: Registers are not available in btrace record history>) at ../stdlib/strtol.c:108^M
#2  _dl_runtime_resolve () at ../sysdeps/x86_64/dl-trampoline.S:56^M
#3  0x00000000004004c6 in ?? ()^M
#4  0x00000000004004fb in strtoul@plt ()^M
#5  0x000000000040060c in test () at ./gdb.btrace/rn-dl-bind.c:26^M
#6  0x0000000000400621 in main () at ./gdb.btrace/rn-dl-bind.c:35^M
Backtrace stopped: not enough registers or memory available to unwind further^M



> It further causes the finish command to fail for tail calls.
> 
> This patch skips tail calls when computing the back trace during replay.  The
> finish command now works also for tail calls.
> 
> The tail caller is still shown in the function-call history.
> 
> I'm not sure which is the better behavior.  I liked seeing the tail caller in
> the call stack and I'm not using the finish command very often.  On the other
> hand, reverse/replay should be as close to live debugging as possible.
> 
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* record-btrace.c (record_btrace_frame_sniffer): Skip tail calls.
> 
> testsuite/
> 	* gdb.btrace/tailcall.exp: Update.  Add stepping tests.
> 	* gdb.btrace/rn-dl-bind.c: New.
> 	* gdb.btrace/rn-dl-bind.exp: New.
> 
> 
> ---
>  gdb/record-btrace.c                     |   15 ++++++----
>  gdb/testsuite/gdb.btrace/rn-dl-bind.c   |   37 +++++++++++++++++++++++
>  gdb/testsuite/gdb.btrace/rn-dl-bind.exp |   48 +++++++++++++++++++++++++++++++
>  gdb/testsuite/gdb.btrace/tailcall.exp   |   25 +++++++++++++--
>  4 files changed, 115 insertions(+), 10 deletions(-)
>  create mode 100644 gdb/testsuite/gdb.btrace/rn-dl-bind.c
>  create mode 100644 gdb/testsuite/gdb.btrace/rn-dl-bind.exp
> 
> diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
> index b45a5fb..9feda30 100644
> --- a/gdb/record-btrace.c
> +++ b/gdb/record-btrace.c
> @@ -1026,7 +1026,7 @@ record_btrace_frame_this_id (struct frame_info *this_frame, void **this_cache,
>    cache = *this_cache;
>  
>    stack = 0;
> -  code = get_frame_func (this_frame);
> +  code = cache->pc;
>    special = (CORE_ADDR) cache->bfun;
>  
>    *this_id = frame_id_build_special (stack, code, special);
> @@ -1120,6 +1120,13 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
>    caller = bfun->up;
>    pc = 0;
>  
> +  /* Skip tail calls.  */
> +  while (caller != NULL && (bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) != 0)
> +    {
> +      bfun = caller;
> +      caller = bfun->up;
> +    }
> +
>    /* Determine where to find the PC in the upper function segment.  */
>    if (caller != NULL)
>      {
> @@ -1133,11 +1140,7 @@ record_btrace_frame_sniffer (const struct frame_unwind *self,
>  	  insn = VEC_last (btrace_insn_s, caller->insn);
>  	  pc = insn->pc;
>  
> -	  /* We link directly to the jump instruction in the case of a tail
> -	     call, since the next instruction will likely be outside of the
> -	     caller function.  */
> -	  if ((bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) == 0)
> -	    pc += gdb_insn_length (get_frame_arch (this_frame), pc);
> +	  pc += gdb_insn_length (get_frame_arch (this_frame), pc);
>  	}
>  
>        DEBUG ("[frame] sniffed frame for %s on level %d",
> diff --git a/gdb/testsuite/gdb.btrace/rn-dl-bind.c b/gdb/testsuite/gdb.btrace/rn-dl-bind.c
> new file mode 100644
> index 0000000..4930297
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/rn-dl-bind.c
> @@ -0,0 +1,37 @@
> +/* This testcase is part of GDB, the GNU debugger.
> +
> +   Copyright 2013 Free Software Foundation, Inc.
> +
> +   Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#include <stdlib.h>
> +
> +int test (void)
> +{
> +  int ret;
> +
> +  ret = strtoul ("42", NULL, 10);	/* test.1 */
> +  return ret;				/* test.2 */
> +}					/* test.3 */
> +
> +int
> +main (void)
> +{
> +  int ret;
> +
> +  ret = test ();			/* main.1 */
> +  return ret;				/* main.2 */
> +}					/* main.3 */
> diff --git a/gdb/testsuite/gdb.btrace/rn-dl-bind.exp b/gdb/testsuite/gdb.btrace/rn-dl-bind.exp
> new file mode 100644
> index 0000000..4d803f9
> --- /dev/null
> +++ b/gdb/testsuite/gdb.btrace/rn-dl-bind.exp
> @@ -0,0 +1,48 @@
> +# This testcase is part of GDB, the GNU debugger.
> +#
> +# Copyright 2013 Free Software Foundation, Inc.
> +#
> +# Contributed by Intel Corp. <markus.t.metzger@intel.com>
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# check for btrace support
> +if { [skip_btrace_tests] } { return -1 }
> +
> +# start inferior
> +standard_testfile
> +if [prepare_for_testing $testfile.exp $testfile $srcfile {c++ debug}] {
> +    return -1
> +}
> +if ![runto_main] {
> +    return -1
> +}
> +
> +# trace the code for the call to test
> +gdb_test_no_output "record btrace" "rn-dl-bind, 0.1"
> +gdb_test "next" ".*main\.2.*" "rn-dl-bind, 0.2"
> +
> +# just dump the function-call-history to help debugging
> +gdb_test_no_output "set record function-call-history-size 0" "rn-dl-bind, 0.3"
> +gdb_test "record function-call-history /cli 1" ".*" "rn-dl-bind, 0.4"
> +
> +# check that we can reverse-next and next
> +gdb_test "reverse-next" ".*main\.1.*" "rn-dl-bind, 1.1"
> +gdb_test "next" ".*main\.2.*" "rn-dl-bind, 1.2"
> +
> +# now go into test and try to reverse-next and next over the library call
> +gdb_test "reverse-step" ".*test\.3.*" "rn-dl-bind, 2.1"
> +gdb_test "reverse-step" ".*test\.2.*" "rn-dl-bind, 2.2"
> +gdb_test "reverse-next" ".*test\.1.*" "rn-dl-bind, 2.3"
> +gdb_test "next" ".*test\.2.*" "rn-dl-bind, 2.4"
> diff --git a/gdb/testsuite/gdb.btrace/tailcall.exp b/gdb/testsuite/gdb.btrace/tailcall.exp
> index 5cadee0..df8d66a 100644
> --- a/gdb/testsuite/gdb.btrace/tailcall.exp
> +++ b/gdb/testsuite/gdb.btrace/tailcall.exp
> @@ -57,12 +57,29 @@ gdb_test "record goto 4" "
>  # check the backtrace
>  gdb_test "backtrace" "
>  #0.*bar.*at .*x86-tailcall.c:24.*\r
> -#1.*foo.*at .*x86-tailcall.c:29.*\r
> -#2.*main.*at .*x86-tailcall.c:37.*\r
> +#1.*main.*at .*x86-tailcall.c:37.*\r
>  Backtrace stopped: not enough registers or memory available to unwind further" "backtrace in bar"

You should use \[^\r\n\]* instead of .* in all your testcases, but the problem
is everywhere.  Normally it is not so serious, so I did not require a change,
but here, for example, it produces a false positive, as it will still
incorrectly match:
	backtrace
	#0  0x00000000004005b5 in bar () at gdb/testsuite/gdb.btrace/x86-tailcall.c:24
	#1  foo () at gdb/testsuite/gdb.btrace/x86-tailcall.c:29
	#2  0x00000000004005d5 in main () at gdb/testsuite/gdb.btrace/x86-tailcall.c:37
	Backtrace stopped: not enough registers or memory available to unwind further
	(gdb) PASS: gdb.btrace/tailcall.exp: backtrace in bar

In fact it would be better to fix it wherever you can.


>  
>  # walk the backtrace
>  gdb_test "up" "
> -.*foo \\(\\) at .*x86-tailcall.c:29.*" "up to foo"
> -gdb_test "up" "
>  .*main \\(\\) at .*x86-tailcall.c:37.*" "up to main"
> +gdb_test "down" "
> +#0.*bar.*at .*x86-tailcall.c:24.*" "down to bar"
> +
> +# test stepping into and out of tailcalls.
> +gdb_test "finish" "
> +.*main.*at .*x86-tailcall.c:37.*" "step, 1.1"
> +gdb_test "reverse-step" "
> +.*bar.*at .*x86-tailcall.c:24.*" "step, 1.2"
> +gdb_test "reverse-finish" "
> +.*foo \\(\\) at .*x86-tailcall.c:29.*" "step, 1.3"
> +gdb_test "reverse-step" "
> +.*main.*at .*x86-tailcall.c:37.*" "step, 1.4"
> +gdb_test "next" "
> +.*main.*at .*x86-tailcall.c:39.*" "step, 1.5"
> +gdb_test "reverse-next" "
> +.*main.*at .*x86-tailcall.c:37.*" "step, 1.6"
> +gdb_test "step" "
> +.*foo \\(\\) at .*x86-tailcall.c:29.*" "step, 1.7"
> +gdb_test "finish" "
> +.*main.*at .*x86-tailcall.c:37.*" "step, 1.8"
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 05/24] record-btrace: start counting at one
  2013-07-03  9:14 ` [patch v4 05/24] record-btrace: start counting at one Markus Metzger
@ 2013-08-18 19:11   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:11 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:15 +0200, Markus Metzger wrote:
> The record instruction-history and record-function-call-history commands start
> counting instructions at zero.  This is somewhat unintuitive when we start
> navigating in the recorded instruction history.  Start at one, instead.
> 
> 2013-07-03  Markus Metzger <markus.t.metzger@intel.com>
> 
>     * btrace.c (ftrace_new_function): Start counting at one.
> 
> testsuite/
>     * gdb.btrace/instruction_history.exp: Update.
>     * gdb.btrace/function_call_history.exp: Update.

This is OK.


Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 02/24] record: upcase record_print_flag enumeration constants
  2013-07-03  9:14 ` [patch v4 02/24] record: upcase record_print_flag enumeration constants Markus Metzger
@ 2013-08-18 19:11   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:11 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:12 +0200, Markus Metzger wrote:
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* record.h (record_print_flag) <record_print_src_line,
> 	record_print_insn_range>: Rename into ...
> 	(record_print_flag) <record_print_src_line,
> 	record_print_insn_range>: ... this.  Update all users.

This is OK.  It is also applicable independently.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 08/24] record-btrace: make ranges include begin and end
  2013-07-03  9:14 ` [patch v4 08/24] record-btrace: make ranges include begin and end Markus Metzger
@ 2013-08-18 19:12   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:12 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches, Christian Himpel

On Wed, 03 Jul 2013 11:14:18 +0200, Markus Metzger wrote:
> The "record function-call-history" and "record instruction-history" commands
> accept a range "begin, end".  End is not included in both cases.  Include it.
> 
> Reviewed-by: Eli Zaretskii  <eliz@gnu.org>
> CC: Christian Himpel  <christian.himpel@intel.com>
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* record-btrace.c (record_btrace_insn_history_range): Include
> 	end.
> 	(record_btrace_insn_history_from): Adjust range.
> 	(record_btrace_call_history_range): Include
> 	end.
> 	(record_btrace_call_history_from): Adjust range.
> 
> testsuite/
> 	* gdb.btrace/function_call_history.exp: Update tests.
> 	* gdb.btrace/instruction_history.exp: Update tests.
> 
> doc/
> 	* gdb.texinfo (Process Record and Replay): Update documentation.

This is OK.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 12/24] frame, backtrace: allow targets to supply a frame unwinder
  2013-07-03  9:15 ` [patch v4 12/24] frame, backtrace: allow targets to supply a frame unwinder Markus Metzger
@ 2013-08-18 19:14   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:14 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:22 +0200, Markus Metzger wrote:
> gdb/
> 2013-02-11  Jan Kratochvil  <jan.kratochvil@redhat.com>
> 
>         * dwarf2-frame.c (dwarf2_frame_cfa): Move UNWIND_UNAVAILABLE check
>         earlier.
>         * frame-unwind.c: Include target.h.
>         (frame_unwind_try_unwinder): New function with code from ...
>         (frame_unwind_find_by_frame): ... here.  New variable
>         unwinder_from_target, call also target_get_unwinder and
>         frame_unwind_try_unwinder for it.
>         * frame.c (get_frame_unwind_stop_reason): Unconditionally call
>         get_prev_frame_1.
>         * target.c (target_get_unwinder): New.
>         * target.h (struct target_ops): New field to_get_unwinder.
>         (target_get_unwinder): New declaration.

OK as review by Tom in February.
	Message-ID: <87halipmrg.fsf@fleche.redhat.com>


Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 16/24] record-btrace: provide target_find_new_threads method
  2013-07-03  9:14 ` [patch v4 16/24] record-btrace: provide target_find_new_threads method Markus Metzger
@ 2013-08-18 19:15   ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:15 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:26 +0200, Markus Metzger wrote:
> 2013-07-03  Markus Metzger <markus.t.metzger@intel.com>
> 
> 	* record-btrace.c (record_btrace_find_new_threads): New.
> 	(init_record_btrace_ops): Initialize to_find_new_threads.

OK for this patch.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 21/24] record-btrace: show trace from enable location
  2013-07-03  9:15 ` [patch v4 21/24] record-btrace: show trace from enable location Markus Metzger
  2013-08-18 19:10   ` instruction_history.exp unset variable [Re: [patch v4 21/24] record-btrace: show trace from enable location] Jan Kratochvil
@ 2013-08-18 19:16   ` Jan Kratochvil
  1 sibling, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-08-18 19:16 UTC (permalink / raw)
  To: Markus Metzger; +Cc: gdb-patches

On Wed, 03 Jul 2013 11:14:31 +0200, Markus Metzger wrote:
> 2013-07-03  Markus Metzger  <markus.t.metzger@intel.com>
> 
> 	* btrace.c: Include regcache.h.
> 	(btrace_add_pc): New.
> 	(btrace_enable): Call btrace_add_pc.
> 	(btrace_is_empty): New.
> 	(btrace_fetch): Return if replaying.
> 	* btrace.h (btrace_is_empty): New.
> 	* record-btrace.c (require_btrace, record_btrace_info): Call
> 	btrace_is_empty.
> 
> testsuite/
> 	* gdb.btrace/exception.exp: Update.
> 	* gdb.btrace/instruction_history.exp: Update.
> 	* gdb.btrace/record_goto.exp: Update.
> 	* gdb.btrace/tailcall.exp: Update.
> 	* gdb.btrace/unknown_functions.exp: Update.
> 	* gdb.btrace/delta.exp: New.

OK for this patch.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 03/24] btrace: change branch trace data structure
  2013-08-18 19:05   ` Jan Kratochvil
@ 2013-09-10  9:11     ` Metzger, Markus T
  2013-09-12 20:09       ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-10  9:11 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches, Himpel, Christian

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Sunday, August 18, 2013 9:04 PM

Thanks for your review.


> > @@ -248,89 +185,477 @@ ftrace_skip_file (struct btrace_func *bfun, const char *filename)
> >    else
> >      bfile = "";
> >
> > -  if (filename == NULL)
> > -    filename = "";
> > +  if (fullname == NULL)
> > +    fullname = "";
> 
> The code should not assume FULLNAME cannot be ""; "" is theoretically a
> valid source file name.
> 
> Second reason is that currently no caller of ftrace_skip_file will pass NULL
> as the second parameter.
> 
> So the function can be just:
> 
>   if (sym == NULL)
>     return 1;
> 
>   bfile = symtab_to_fullname (sym->symtab);
> 
>   return filename_cmp (bfile, fullname) != 0;

Sounds good.  Changed it.


> And the function has only one caller, so IMO it would be easier to read if
> it were inlined.

I'd rather keep it separate for documentation purposes.


> And I am not sure it matters much, but calling symtab_to_fullname twice to
> compare two symtabs for equality is needlessly expensive
> - symtab_to_fullname is very expensive.  There are several places in GDB
> that first do:
>           /* Before we invoke realpath, which can get expensive when many
>              files are involved, do a quick comparison of the basenames.  */
>           if (!basenames_may_differ
>               && filename_cmp (lbasename (symtab1->filename),
>                                lbasename (symtab2->filename)) != 0)
>             continue;

I would expect it does matter.  I noticed a slowdown when the trace gets
into the order of 1M instructions.  I have not done any profiling yet, but I
will get back to this once I look into performance.


> > +static void
> > +ftrace_update_caller (struct btrace_function *bfun,
> > +		      struct btrace_function *caller,
> > +		      unsigned int flags)
> 
> FLAGS should be enum btrace_function_flag (it is an ORed bitmask, but GDB
> displays ORed enum bitmasks appropriately).

Changed it.  This will burn us when we want to switch to C++ someday.


> > +/* Fix up the caller for a function segment.  */
> 
> IIUC it should be:
> 
> /* Fix up the caller for all segments of a function call.  */

Thanks.  Yes, that's how it should be.


> > +	  /* We maintain levels for a series of returns for which we have
> > +	     not seen the calls, but we restart at level 0, otherwise.  */
> > +	  bfun->level = min (0, prev->level) - 1;
> 
> Why is there the 'min (0, ' part?

When we return from some tail call chain, for example, and we have
not traced the actual function call that started this chain.

I added a reference to tail calls in the comment.


> > -	  bfun = VEC_safe_push (btrace_func_s, ftrace, NULL);
> > +	  /* There is a call in PREV's back trace to which we should have
> > +	     returned.  Let's remain at this level.  */
> > +	  bfun->level = prev->level;
> 
> Shouldn't here be rather:
> 	  bfun->level = caller->level;

We did not return to this caller - otherwise, we would have found it before.

This is handling a case that should not normally occur.  No matter what we
do, the indentation will likely be off in one or the other case.


> > +	  /* If we have symbol information for our current location, use
> > +	     it to check that we jump to the start of a function.  */
> > +	  if (fun != NULL || mfun != NULL)
> > +	    start = get_pc_function_start (pc);
> > +	  else
> > +	    start = pc;
> 
> This relies on an implementation detail of get_pc_function_start.  Rather,
> always call get_pc_function_start, but then check whether it failed in all
> cases (you do not check whether get_pc_function_start failed).
> get_pc_function_start returns 0 if it has failed.

The check is implicit since pc can't be zero.


> Or was the 'fun != NULL || mfun != NULL' check there for performance
> reasons?

That's for performance reasons.  No need to call the function if we know
it won't help us.


> > +static void
> > +btrace_compute_ftrace (struct btrace_thread_info *btinfo,
> > +		       VEC (btrace_block_s) *btrace)
> 
> When doing any non-trivial trace on a buggy Nehalem (enabling btrace by a
> GDB patch), GDB locks up on "info record".  I found it is looping in this
> function with a too big btrace range:
> (gdb) p *block
> $5 = {begin = 4777824, end = 9153192}
> 
> But one can break it easily with CTRL-C, and hopefully such things do not
> happen on CPUs with correct btrace.

We should not normally get such trace.  I could add a simple heuristic that
blocks can't be bigger than x bytes but I don't think that's necessary.


> > +const struct btrace_insn *
> > +btrace_insn_get (const struct btrace_insn_iterator *it)
> > +{
> > +  const struct btrace_function *bfun;
> > +  unsigned int index, end;
> > +
> > +  if (it == NULL)
> > +    return NULL;
> 
> I do not see this style in GDB, and IMO it can delay a bug report away from
> where it occurred.  Either gdb_assert (it != NULL); or just leave it to crash
> below.

OK.  It's just a habit.


> > +  index = it->index;
> > +  bfun = it->function;
> > +  if (bfun == NULL)
> > +    return NULL;
> 
> btrace_insn_iterator::function does not state whether NULL is allowed and
> what it means in such a case.  The btrace_call_get description states "NULL
> if the iterator points past the end of the branch trace." but I do not see
> how it could be set to NULL in any current code (presumably it was so in
> older code).  btrace_insn_next returns the last instruction, not a last+1
> pointer.
> 
> IMO it should be stated that btrace_insn_iterator::function can never be
> NULL, and here there should be either gdb_assert (bfun != NULL); or just
> nothing, like above.

Done.  That's indeed a leftover.  Thanks for pointing it out.


> > +
> > +  btinfo = it->btinfo;
> > +  if (btinfo == NULL)
> > +    return 0;
> 
> Similarly, btrace_call_iterator::btinfo does not state whether it can be
> NULL, and consequently the code here should rather gdb_assert it (or ignore
> it altogether).

OK.  I ignored it.


> > +  bfun = it->function;
> > +  if (bfun != NULL)
> > +    return bfun->number;
> 
> Similarly, btrace_call_iterator::function does not state whether it can be
> NULL, and consequently the code here should rather gdb_assert it (or ignore
> it altogether).

For the call iterator, function can be NULL (see e.g. btrace_call_end).

It is documented in btrace.h  (maybe I added this afterwards).


> > +
> > +  /* The branch trace function segment.
> > +     This will be NULL for the iterator pointing to the end of the trace.  */
> 
> btrace_call_next can return NULL in the function field, while
> btrace_insn_next rather returns the very last of all instructions.  Is there
> a reason for this difference?

The end iterator points one past the last element.  For instructions, this is
one past the last instruction index in the last (non-empty) function segment.
For calls, this is a NULL function segment.
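
As a rough illustration of the two conventions, assuming the iterator layout
quoted above (the helper names are made up for this sketch and are not part of
the patch):

  /* Instruction iterator: the end is one past the last instruction index
     of the last function segment.  */
  static int
  insn_iter_is_end (const struct btrace_insn_iterator *it)
  {
    return it->index == VEC_length (btrace_insn_s, it->function->insn);
  }

  /* Call iterator: the end is marked by a NULL function segment.  */
  static int
  call_iter_is_end (const struct btrace_call_iterator *it)
  {
    return it->function == NULL;
  }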


> >    uiout = current_uiout;
> >    uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
> >  						       "insn history");
> > -  btinfo = require_btrace ();
> > -  last = VEC_length (btrace_inst_s, btinfo->itrace);
> > +  low = (unsigned int) from;
> > +  high = (unsigned int) to;
> 
> I do not see a reason for this cast; it is not even a signed vs. unsigned case.

From and to are ULONGEST which may be 64bit whereas low and high
are 32bit.


> > -  if (end <= begin)
> > +  if (high <= low)
> >      error (_("Bad range."));
> 
> Function description says:
>     /* Disassemble a section of the recorded execution trace from instruction
>        BEGIN (inclusive) to instruction END (exclusive).  */
> 
> But it behaves as if END were inclusive.  Or do I not understand something?
> (gdb) record instruction-history 1925,1926
> 1925	   0x00007ffff62f6afc <memset+28>:	ja     0x7ffff62f6b30 <memset+80>
> 1926	   0x00007ffff62f6afe <memset+30>:	cmp    $0x10,%rdx
> 
> If it should be inclusive then LOW == HIGH should be allowed:
> (gdb) record instruction-history 1925,1925
> Bad range.
> 
> Not in this patch (in some later one) but there is also:
>       /* We want both begin and end to be inclusive.  */
>       btrace_insn_next (&end, 1);
> 
> which contradicts the description of to_insn_history_range.

Initially, I had it exclusive, and this should be the behaviour if you just apply
this patch.  Later on, I changed it to be inclusive, instead, to better match
existing commands like list.

I fixed this behaviour in the respective patch, added a test, and updated
the comment in target.h.


> Unrelated to this patch, but the function record_btrace_insn_history_from
> does not need to be virtualized.  It does not access any internals of
> record-btrace.c; it could be fully implemented in the superclass record.c,
> and to_insn_history_from could be deleted.
> 
> The same applies for record_btrace_call_history_from and
> to_call_history_from.

Both depend on the numbering scheme, which is an implementation detail.
They both assume that counting starts at 0 (at 1 in a later patch).

This does not hold for record-full, where the lowest instruction may be
bigger than zero.



> > -  if (end <= begin)
> > +  if (high <= low)
> >      error (_("Bad range."));
> 
> Similar inclusive/exclusive question as in record_btrace_insn_history_range
> and
> to_insn_history_range.
> 
>       /* We want both begin and end to be inclusive.  */
>       btrace_call_next (&end, 1);
> 
> (gdb) record function-call-history 700,701
> 700	_dl_lookup_symbol_x
> 701	_dl_fixup
> (gdb) record function-call-history 700,700
> Bad range.

I fixed this behaviour in the respective patch, added a test, and updated
the comment in target.h.


Regards,
Markus.
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 07/24] record-btrace: optionally indent function call history
  2013-08-18 19:06   ` Jan Kratochvil
@ 2013-09-10 13:06     ` Metzger, Markus T
  2013-09-10 13:08       ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-10 13:06 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches, Himpel, Christian

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil
> Sent: Sunday, August 18, 2013 9:06 PM

Thanks for your review.


> > +      else
> > +	ui_out_field_string (uiout, "function", "<unknown>");
> 
> Here should be _("<unknown>").  (BTW I do not know about any existing
> localized message catalogs for GDB.)
> 
> _() would be inappropriate for MI, but in such a case there should IMO
> anyway rather be:
> 
>   else if (!ui_out_is_mi_like_p (uiout))
>     ui_out_field_string (uiout, "function", _("<unknown>"));
> 
> But there is currently no MI interface setup for these commands (although
> you have nicely prepared the commands for MI) so I do not find it worth the
> time to discuss MI issues now.

I changed it like you proposed above with no output for MI.

What should we do with text output like "inst" and "at" below?

> > +	  ui_out_text (uiout, "\tinst ");

Should I split this to separate "inst" from the formatting?
Or is it OK to just say '_("\tinst ")'?


Thanks,
Markus.
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 07/24] record-btrace: optionally indent function call history
  2013-09-10 13:06     ` Metzger, Markus T
@ 2013-09-10 13:08       ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-10 13:08 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, Himpel, Christian

On Tue, 10 Sep 2013 15:06:00 +0200, Metzger, Markus T wrote:
> What should we do with text output like "inst" and "at" below.

You are right, it should also be localized.


> > > +	  ui_out_text (uiout, "\tinst ");
> 
> Would I split this to separate "inst" from the formatting?
> Or is it OK to just say '_("\tinst ")'?

_("\tinst ") is OK.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 09/24] btrace: add replay position to btrace thread info
  2013-08-18 19:07   ` Jan Kratochvil
@ 2013-09-10 13:24     ` Metzger, Markus T
  2013-09-12 20:19       ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-10 13:24 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Sunday, August 18, 2013 9:07 PM

Thanks for your review.


> > +      if (size < 0)
> > +	{
> > +	  /* We want the current position covered, as well.  */
> > +	  covered = btrace_insn_next (&end, 1);
> > +	  covered += btrace_insn_prev (&begin, context - covered);
> > +	  covered += btrace_insn_next (&end, context - covered);
> > +	}
> > +      else
> > +	{
> > +	  covered = btrace_insn_next (&end, context);
> > +	  covered += btrace_insn_prev (&begin, context - covered);
> > +	}
> 
> These two COVERED calculations do not seem right to me; the pointer is
> moving NEXT and PREV, so the directions should be both added and subtracted.

context = abs (size).

Both iterator functions return the number of instructions they moved into
the respective direction.
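
Spelling out the "size < 0" branch above, same code as quoted with comments
added; both iterator functions return how many instructions they actually
moved, so the remaining budget is always context - covered:

  covered  = btrace_insn_next (&end, 1);                    /* Include the current instruction.  */
  covered += btrace_insn_prev (&begin, context - covered);  /* Move BEGIN back by the remaining budget.  */
  covered += btrace_insn_next (&end, context - covered);    /* If BEGIN hit the start of the trace, extend END instead.  */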

Regards,
Markus.

Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 03/24] btrace: change branch trace data structure
  2013-09-10  9:11     ` Metzger, Markus T
@ 2013-09-12 20:09       ` Jan Kratochvil
  2013-09-16  9:01         ` Metzger, Markus T
  2013-09-22 16:57         ` Jan Kratochvil
  0 siblings, 2 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-12 20:09 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, Himpel, Christian

On Tue, 10 Sep 2013 11:10:33 +0200, Metzger, Markus T wrote:
> > > +static void
> > > +ftrace_update_caller (struct btrace_function *bfun,
> > > +		      struct btrace_function *caller,
> > > +		      unsigned int flags)
> > 
> > FLAGS should be enum btrace_function_flag (it is ORed bitmask but GDB
> > displays
> > enum ORed bitmasks appropriately).
> 
> Changed it.  This will burn us when we want to switch to C++ someday.

I would prefer a wrapper class, so that Enum | Enum remains Enum.
OTOH wrapper classes may not be too easily understandable for contributors.
That would need an agreement first.

But that is off-topic here; so far, enum types have been kept.


> > > +	  /* We maintain levels for a series of returns for which we have
> > > +	     not seen the calls, but we restart at level 0, otherwise.  */
> > > +	  bfun->level = min (0, prev->level) - 1;
> > 
> > Why is there the 'min (0, ' part?
> 
> When we return from some tail call chain, for example, and we have
> not traced the actual function call that started this chain.
> 
> I added a reference to tail calls in the comment.

I have now found that the problem is:

struct btrace_function
  /* The function level in a back trace across the entire branch trace.
     A caller's level is one higher than the level of its callee.

     Levels can be negative if we see returns for which we have not seen
     the corresponding calls.  The branch trace thread information provides
     a fixup to normalize function levels so the smallest level is zero.  */
  int level;

should be:
-    A caller's level is one higher than the level of its callee.
+    A callee's level is one higher than the level of its caller.

as one can see for gdb.btrace/tailcall.exp:

record function-call-history /c 1^M
1       0main^M
2       1  foo^M
3       2    bar^M
4       0main^M
        ^

In such case please rename btrace_function->level to something else, such as
btrace_function->calls_level or btrace_function->reverse_level etc.
as it is the opposite of the related GDB frame_info->level field.


The 'min (0, ' then makes sense to me:
1       1  foo
2       2    bar
3       0main


> > > -	  bfun = VEC_safe_push (btrace_func_s, ftrace, NULL);
> > > +	  /* There is a call in PREV's back trace to which we should have
> > > +	     returned.  Let's remain at this level.  */
> > > +	  bfun->level = prev->level;
> > 
> > Shouldn't here be rather:
> > 	  bfun->level = caller->level;
> 
> We did not return to this caller - otherwise, we would have found it before.
> 
> This is handling a case that should not normally occur.  No matter what we
> do, the indentation will likely be off in one or the other case.

We know the most recent tail calls are done so it should be safe to subtract
them from the current level.

But I agree it should not happen so the current code also makes sense.


> > > +	  /* If we have symbol information for our current location, use
> > > +	     it to check that we jump to the start of a function.  */
> > > +	  if (fun != NULL || mfun != NULL)
> > > +	    start = get_pc_function_start (pc);
> > > +	  else
> > > +	    start = pc;
> > 
> > This goes into implementation detail of get_pc_function_start.  Rather
> > always
> > call get_pc_function_start but one should check if it failed in all cases
> > (you do not check if get_pc_function_start failed).  get_pc_function_start
> > returns 0 if it has failed.
> 
> The check is implicit since pc can't be zero.

> PC can be zero on embedded platforms; _start commonly starts there.  It is
> a bug of get_pc_function_start that it is not compatible with that.

> A newer implementation of get_pc_function_start may fail in some cases even
> if FUN or MFUN is not NULL.

The code is making needless assumptions about get_pc_function_start inners.


> > Or was the 'fun != NULL || mfun != NULL' check there for performance
> > reasons?
> 
> That's for performance reasons.  No need to call the function if we know
> it won't help us.

> The idea was that, for example, GDB may introduce a third kind of symbol,
> besides minimal symbols and full symbols.  At that point get_pc_function_start
> could work with the third kind of symbol, but the code as-is would not call
> get_pc_function_start at all.

The code is making needless assumptions about get_pc_function_start inners.


> > > +static void
> > > +btrace_compute_ftrace (struct btrace_thread_info *btinfo,
> > > +		       VEC (btrace_block_s) *btrace)
> > 
> > When doing any non-trivial trace on buggy Nehalem (enabling btrace by a
> > GDB
> > patch) GDB locks up on "info record".  I found it is looping in this function
> > with too big btrace range:
> > (gdb) p *block
> > $5 = {begin = 4777824, end = 9153192}
> > 
> > But one can break it easily with CTRL-C and hopefully on btrace-correct CPUs
> > such things do not happen.
> 
> We should not normally get such trace.  I could add a simple heuristic that
> blocks can't be bigger than x bytes but I don't think that's necessary.

OK; I agree such trace should not normally happen.


> > > +  bfun = it->function;
> > > +  if (bfun != NULL)
> > > +    return bfun->number;
> > 
> > Similarly, btrace_call_iterator::function does not state whether it can be
> > NULL, and consequently the code here should rather gdb_assert it (or ignore
> > it altogether).
> 
> For the call iterator, function can be NULL (see e.g. btrace_call_end).
> 
> It is documented in btrace.h  (maybe I added this afterwards).

I see now, my mistake checking it all.


> > > +  /* The branch trace function segment.
> > > +     This will be NULL for the iterator pointing to the end of the trace.  */
> > 
> > btrace_call_next can return NULL in function while btrace_insn_next returns
> > rather the very last of all instructions.  Is there a reason for this
> > difference?
> 
> The end iterator points one past the last element.  For instructions, this is
> one past the last instruction index in the last (non-empty) function segment.
> For calls, this is a NULL function segment.

Thanks for the explanation, I see now there is no better solution.

Functions are given by pointer while instructions are given by their index.
Therefore "after the end" for function is NULL while "after the end" for
instructions can be last_index+1.


> > > +  low = (unsigned int) from;
> > > +  high = (unsigned int) to;
> > 
> > I do not see a reason for this cast, it is even not signed vs. unsigned.
> 
> From and to are ULONGEST which may be 64bit whereas low and high
> are 32bit.

> The '(unsigned int)' cast just does not have to be there.  If you want to
> highlight that the assignment trims the type width, you could rather write
> that as a comment.


> > > -  if (end <= begin)
> > > +  if (high <= low)
> > >      error (_("Bad range."));
> > 
> > Function description says:
> >     /* Disassemble a section of the recorded execution trace from instruction
> >        BEGIN (inclusive) to instruction END (exclusive).  */
> > 
> > But it behaves as if END were inclusive.  Or do I not understand something?
> > (gdb) record instruction-history 1925,1926
> > 1925	   0x00007ffff62f6afc <memset+28>:	ja     0x7ffff62f6b30
> > <memset+80>
> > 1926	   0x00007ffff62f6afe <memset+30>:	cmp    $0x10,%rdx
> > 
> > If it should be inclusive then LOW == HIGH should be allowed:
> > (gdb) record instruction-history 1925,1925
> > Bad range.
> > 
> > Not in this patch (in some later one) but there is also:
> >       /* We want both begin and end to be inclusive.  */
> >       btrace_insn_next (&end, 1);
> > 
> > which contradicts the description of to_insn_history_range.
> 
> Initially, I had it exclusive, and this should be the behaviour if you just apply
> this patch.  Later on, I changed it to be inclusive, instead, to better match
> existing commands like list.
> 
> I fixed this behaviour in the respective patch, added a test, and updated
> the comment in target.h.

OK, but please do not misuse a patch series for chronological development.
Patch series splitting is there for separation of topics.


> > Unrelated to this patch but the function record_btrace_insn_history_from
> > does
> > not need to be virtualized.  It does not access any internals of
> > record-btrace.c, it could be fully implemented in the superclass record.c and
> > to_insn_history_from could be deleted.
> > 
> > The same applies for record_btrace_call_history_from and
> > to_call_history_from.
> 
> Both depend on the numbering scheme, which is an implementation detail.
> They both assume that counting starts at 0 (at 1 in a later patch).
> 
> This does not hold for record-full, where the lowest instruction may be
> bigger than zero.

OK, one reason is that currently there is no implementation of these
methods for record-full:
	(gdb) record instruction-history 
	You can't do that when your target is `record-full'

> The second reason is that record-full can drop old records, keeping only
> the last window:
	(gdb) set record full insn-number-max 10
	(gdb) record
	(gdb) info record
	Active record target: record-full
	Record mode:
	Lowest recorded instruction number is 1587.
	Highest recorded instruction number is 1596.
	Log contains 10 instructions.
	Max logged instructions is 10.

> The btrace backend does not seem to support such a sliding window (the
> kernel buffer sliding is unrelated).  GDB still stores all the btrace records
> in its memory and one cannot do anything like
	(gdb) set record btrace insn-number-max 10

> Could that be a problem for practical btrace use cases?  As I do not have a
> CPU capable of btrace, I cannot say how long it will take before the btrace
> storage becomes too big (such as >100MB) for long-running processes under GDB.

> Still, I believe the code for methods like to_insn_history_from should be
> common for all the backends, as the user-visible behavior should be the same.
> And this common code should support an arbitrary "Lowest recorded instruction
> number" (which the btrace backend currently does not support).
> 
> But as this is only a future extension, and the to_insn_history_from & co.
> methods are already checked into FSF GDB HEAD, this discussion is off-topic
> for this patchset.  The method implementations can be removed and unified
> into common functions after anyone implements "record instruction-history"
> for "record-full" and after anyone implements an arbitrary "Lowest recorded
> instruction number" - window sliding - for the btrace backend.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 09/24] btrace: add replay position to btrace thread info
  2013-09-10 13:24     ` Metzger, Markus T
@ 2013-09-12 20:19       ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-12 20:19 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches

On Tue, 10 Sep 2013 15:24:15 +0200, Metzger, Markus T wrote:
> > > +      if (size < 0)
> > > +	{
> > > +	  /* We want the current position covered, as well.  */
> > > +	  covered = btrace_insn_next (&end, 1);
> > > +	  covered += btrace_insn_prev (&begin, context - covered);
> > > +	  covered += btrace_insn_next (&end, context - covered);
> > > +	}
> > > +      else
> > > +	{
> > > +	  covered = btrace_insn_next (&end, context);
> > > +	  covered += btrace_insn_prev (&begin, context - covered);
> > > +	}
> > 
> > These two COVERED calculations do not seem right to me, pointer is moving
> > NEXT and PREV so the directions should be both added and subtracted.
> 
> context = abs (size).
> 
> Both iterator functions return the number of instructions they moved into
> the respective direction.

OK, I agree now; I missed &begin vs. &end probably, a bit too smart code.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 03/24] btrace: change branch trace data structure
  2013-09-12 20:09       ` Jan Kratochvil
@ 2013-09-16  9:01         ` Metzger, Markus T
  2013-09-21 19:44           ` Jan Kratochvil
  2013-09-22 16:57         ` Jan Kratochvil
  1 sibling, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-16  9:01 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches, Himpel, Christian

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil


Thanks for your feedback.


> > > > +	  /* If we have symbol information for our current location, use
> > > > +	     it to check that we jump to the start of a function.  */
> > > > +	  if (fun != NULL || mfun != NULL)
> > > > +	    start = get_pc_function_start (pc);
> > > > +	  else
> > > > +	    start = pc;
> > >
> > > This goes into implementation detail of get_pc_function_start.
> > > Rather always call get_pc_function_start but one should check if it
> > > failed in all cases (you do not check if get_pc_function_start
> > > failed).  get_pc_function_start returns 0 if it has failed.
> >
> > The check is implicit since pc can't be zero.
> 
> PC can be zero on embedded platforms, _start commonly starts there.  It is a
> bug of get_pc_function_start it is not compatible with it.
> 
> Newer implementation of get_pc_function_start may fail in some case even
> if FUN or MFUN is not NULL.
> 
> The code is making needless assumptions about get_pc_function_start
> inners.
> 
> 
> > > Or was the 'fun != NULL || mfun != NULL' check there for performance
> > > reasons?
> >
> > That's for performance reasons.  No need to call the function if we
> > know it won't help us.
> 
> The idea was that for example GDB may introduce 3rd kind of symbols,
> besides minimal symbols and full symbols.  At that moment
> get_pc_function_start could work with the 3rd kind of symbol which the
> code as is would not call get_pc_function_start at all.
> 
> The code is making needless assumptions about get_pc_function_start
> inners.

I removed the symbol NULL check and instead check for a zero return of
get_pc_function_start (PC).  This still rules out zero as a valid PC value,
but that's the current error return value of get_pc_function_start.
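
A sketch of the adjusted check as described; the variable names follow the
quoted hunk, but this is illustrative rather than the final code:

  /* Use the function start address to check that we jump to the start of
     a function; fall back to PC itself if the lookup failed.  */
  start = get_pc_function_start (pc);
  if (start == 0)
    start = pc;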


> OK, please do not misuse patch series for chronological development.
> Patch series splitting is there for separation of topic.

Do you want me to squash the series into a single patch?


> > > Unrelated to this patch but the function
> > > record_btrace_insn_history_from does not need to be virtualized.  It
> > > does not access any internals of record-btrace.c, it could be fully
> > > implemented in the superclass record.c and to_insn_history_from
> > > could be deleted.
> > >
> > > The same applies for record_btrace_call_history_from and
> > > to_call_history_from.
> >
> > Both depend on the numbering scheme, which is an implementation detail.
> > They both assume that counting starts at 0 (at 1 in a later patch).
> >
> > This does not hold for record-full, where the lowest instruction may
> > be bigger than zero.
> 
> OK, one reason is that currently there is no implementation of these
> methods for record-full:
> 	(gdb) record instruction-history
> 	You can't do that when your target is `record-full'
> 
> The second reason is that while record-full can drop old record, seeing only
> the last window:
> 	(gdb) set record full insn-number-max 10
> 	(gdb) record
> 	(gdb) info record
> 	Active record target: record-full
> 	Record mode:
> 	Lowest recorded instruction number is 1587.
> 	Highest recorded instruction number is 1596.
> 	Log contains 10 instructions.
> 	Max logged instructions is 10.
> 
> btrace backend does not seem to support such sliding window (the kernel
> buffer sliding is unrelated).  GDB still stores in its memory all the btrace
> records and one cannot do anything like
> 	(gdb) set record btrace insn-number-max 10

It's inherent in btrace.  We only ever see the tail of the trace.  We extend the
recorded trace when the kernel buffer does not overflow between updates.
Otherwise, we discard the trace in GDB and start anew with the current tail.


> Still I believe the code for the methods like to_insn_history_from should be
> common for all the backends as the user visible behavior should be the
> same.
> And this common code should support arbitrary "Lowest recorded instruction
> number" (which the btrace backend currently does not support).

The lowest recorded instruction is always zero for record-btrace.

If we added target methods to query for the lowest and highest instruction
number, we could implement the logic in record.c.  I didn't see any benefit
in that, so I didn't do it.  We will end up with about the same number of
target methods either way.
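
For reference, the hypothetical target methods discussed here might look
roughly like this in struct target_ops; they do not exist in the posted
patches:

  /* Return the number of the lowest/highest recorded instruction, so a
     common implementation in record.c could compute history ranges.  */
  ULONGEST (*to_record_first_insn) (struct target_ops *);
  ULONGEST (*to_record_last_insn) (struct target_ops *);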


Regards,
Markus.
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 11/24] record-btrace: supply register target methods
  2013-08-18 19:07   ` Jan Kratochvil
@ 2013-09-16  9:19     ` Metzger, Markus T
  2013-09-22 13:55       ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-16  9:19 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil


> > +/* The to_store_registers method of target record-btrace.  */
> > +
> > +static void
> > +record_btrace_store_registers (struct target_ops *ops,
> > +			       struct regcache *regcache, int regno)
> > +{
> > +  struct target_ops *t;
> > +
> > +  if (record_btrace_is_replaying ())
> > +    return;
> 
> Currently I get:
> 	(gdb) p $rax
> 	$1 = <unavailable>
> 	(gdb) p $rax=1
> 	$2 = <unavailable>
> 
> I would find more appropriate an error() here so that we get:
> 	(gdb) p $rax
> 	$1 = <unavailable>
> 	(gdb) p $rax=1
> 	Some error message.
> 
> With gdbserver trace one gets:
> 	(gdb) print globalc
> 	$1 = <unavailable>
> 	(gdb) print globalc=1
> 	Cannot access memory at address 0x602120
> which is not so convenient as it comes from gdbserver's E01 response:
> gdb_write_memory -> if (current_traceframe >= 0) return EIO; as I checked.

OK.  I added an error message for the to_store_registers method.
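
A sketch of the described change; the exact error message and the delegation
loop are assumptions, not necessarily the final code:

  static void
  record_btrace_store_registers (struct target_ops *ops,
                                 struct regcache *regcache, int regno)
  {
    struct target_ops *t;

    /* Refuse to write registers while replaying recorded execution.  */
    if (record_btrace_is_replaying ())
      error (_("This record target does not allow writing registers."));

    gdb_assert (may_write_registers != 0);

    /* Forward the request to the target beneath.  */
    for (t = ops->beneath; t != NULL; t = t->beneath)
      if (t->to_store_registers != NULL)
        {
          t->to_store_registers (t, regcache, regno);
          return;
        }
  }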


> > +
> > +  if (may_write_registers == 0)
> > +    error (_("Writing to registers is not allowed (regno %d)"), regno);
> 
> Here should be rather:
>   gdb_assert (may_write_registers != 0);
> 
> as target_store_registers() would not pass the call here otherwise.

I took this from target_store_registers in target.c.


Regards,
Markus.
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 14/24] record-btrace: provide xfer_partial target method
  2013-08-18 19:08   ` Jan Kratochvil
@ 2013-09-16  9:30     ` Metzger, Markus T
  2013-09-22 14:18       ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-16  9:30 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Sunday, August 18, 2013 9:08 PM

Thanks for your review.


> > +static LONGEST
> > +record_btrace_xfer_partial (struct target_ops *ops, enum target_object object,
> > +			    const char *annex, gdb_byte *readbuf,
> > +			    const gdb_byte *writebuf, ULONGEST offset,
> > +			    LONGEST len)
> > +{
> > +  struct target_ops *t;
> > +
> > +  /* Normalize the request so len is positive.  */
> > +  if (len < 0)
> > +    {
> > +      offset += len;
> > +      len = - len;
> > +    }
> 
> I do not see how LEN could be < 0, do you?  Use just:
>   gdb_assert (len >= 0);
> (It should never even be LEN == 0, but that may not be guaranteed.)

Hmm, why didn't we use ULONGEST, then?

It looks like all implementations in target.c assume LEN to be positive without
checking.  I'm doing the same.

Regards,
Markus.
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 18/24] record-btrace: extend unwinder
  2013-08-18 19:08   ` Jan Kratochvil
@ 2013-09-16 11:21     ` Metzger, Markus T
  2013-09-27 13:55       ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-16 11:21 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Sunday, August 18, 2013 9:09 PM


> > An assertion in get_frame_id at frame.c:340 requires that a frame
> > provides a stack address.  The record-btrace unwinder can't provide
> > this since the trace does not contain data.  I incorrectly set
> > stack_addr_p to 1 to avoid the assertion.
> 
> Primarily, record-btrace can provide the stack address.  You know $sp at the
> end of the recording, and you can query .eh_frame/.debug_frame at any PC
> address for the difference between $sp and the caller's $sp at that exact PC.
> This assumes either that all the involved binaries were built with
> -fasynchronous-unwind-tables (for .eh_frame) or that debug info (for
> .debug_frame) is present.  The former is true in Fedora / Red Hat distros; I
> am unaware how it is elsewhere.

This would only hold for functions that have not yet returned to their caller.
If we go back far enough, the branch trace will also contain functions that
have already returned to their caller for which we do not have any information.
I would even argue that this is the majority of functions in the branch trace.


> The current method of constant STACK_ADDR may have some problems with
> frame_id_inner() but I did not investigate it more.

By looking at the code, frame_id_inner () should always fail since all btrace
frames have stack_addr == 0.

On the other hand, frame_id_inner is only called for frames of type
NORMAL_FRAME, whereas btrace frames have type BTRACE_FRAME.


> > When evaluating arguments for printing the stack back trace, there's
> > an ugly error displayed: "error reading variable: can't compute CFA for this
> frame".
> > The error is correct, we can't compute the CFA since we don't have the
> > stack at that time, but it is rather annoying at this place and makes
> > the back trace difficult to read.

This has meanwhile been resolved.  This had been a side-effect of throwing
an error in to_fetch_registers.  When I just return, function arguments are
correctly displayed as unavailable and the "can't compute CFA for this frame"
message is gone.
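
A sketch of the behaviour described, i.e. returning without supplying any
registers while replaying so that they read as <unavailable>; this is
illustrative and not the exact patch:

  static void
  record_btrace_fetch_registers (struct target_ops *ops,
                                 struct regcache *regcache, int regno)
  {
    struct target_ops *t;

    /* While replaying, do not supply any register values; they are then
       reported as <unavailable> instead of raising an error.  */
    if (record_btrace_is_replaying ())
      return;

    /* Otherwise forward the request to the target beneath.  */
    for (t = ops->beneath; t != NULL; t = t->beneath)
      if (t->to_fetch_registers != NULL)
        {
          t->to_fetch_registers (t, regcache, regno);
          return;
        }
  }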


> > +  if (bfun == NULL)
> > +    return "<none>";
> 
> _("<none>")

I replaced this with ??.


> > +
> > +  msym = bfun->msym;
> > +  sym = bfun->sym;
> > +
> > +  if (sym != NULL)
> > +    return SYMBOL_PRINT_NAME (sym);
> > +  else if (msym != NULL)
> > +    return SYMBOL_PRINT_NAME (msym);
> > +  else
> > +    return "<unknown>";
> 
> _("<unknown>")

I replaced this with ??.


Regards,
Markus.

Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 20/24] btrace, gdbserver: read branch trace incrementally
  2013-08-18 19:09   ` Jan Kratochvil
@ 2013-09-16 12:48     ` Metzger, Markus T
  2013-09-22 14:42       ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-16 12:48 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches, Pedro Alves

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Sunday, August 18, 2013 9:09 PM

Thanks for your review.


> > -VEC (btrace_block_s) *
> > -linux_read_btrace (struct btrace_target_info *tinfo,
> > +int
> > +linux_read_btrace (VEC (btrace_block_s) **btrace,
> > +		   struct btrace_target_info *tinfo,
> >  		   enum btrace_read_type type)
> >  {
> > -  return NULL;
> > +  return ENOSYS;
> 
> You return -EOVERFLOW in the real implementation but ENOSYS here; the signs
> do not match (and it is not documented).  linux_low_read_btrace checks for
> -EOVERFLOW.

The -EOVERFLOW return signals a buffer overflow which indicates that
delta trace is not available.  GDB then switches to a full read after discarding
the existing trace.

The -ENOSYS return signals that the feature is not available.  This error is
passed on to the user.
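
Roughly, the caller side could then look like this; TINFO, TP, and the
BTRACE_READ_* enumerators are assumptions based on this thread, not
necessarily the patch's exact code:

  VEC (btrace_block_s) *delta = NULL;
  int errcode;

  errcode = target_read_btrace (&delta, tinfo, BTRACE_READ_DELTA);
  if (errcode == -EOVERFLOW)
    {
      /* The trace buffer overflowed, so the delta is unusable.  Discard
         the trace we have and re-read the current tail from scratch.  */
      btrace_clear (tp);
      errcode = target_read_btrace (&delta, tinfo, BTRACE_READ_NEW);
    }

  if (errcode != 0)
    error (_("Failed to read branch trace."));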



> > -    /* Read branch trace data.  */
> > -    VEC (btrace_block_s) *(*to_read_btrace) (struct btrace_target_info *,
> > -					     enum btrace_read_type);
> > +    /* Read branch trace data into DATA.  The vector is cleared before any
> > +       new data is added.
> > +       Returns 0 on success; a negative error code, otherwise.  */
> 
> "a negative errno code" (error code seems too ambiguous to me)
> 
> But target_read_btrace several lines above returns positive errno code.

That was a bug.  Fixed.


> TBH returning all these errno codes are not common in GDB, returning -1
> would make it easier but I do not insist on it.

I need to distinguish different types of errors, e.g. overflow and not supported.


Regards,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: instruction_history.exp unset variable  [Re: [patch v4 21/24] record-btrace: show trace from enable location]
  2013-08-18 19:10   ` instruction_history.exp unset variable [Re: [patch v4 21/24] record-btrace: show trace from enable location] Jan Kratochvil
@ 2013-09-16 14:11     ` Metzger, Markus T
  0 siblings, 0 replies; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-16 14:11 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil

> Not related to this patch but here is a bug:
> 
[...]
> There should be some
> 	set traced ""
> before gdb_test_multiple.
> BTW $traced_functions is not used anywhere.

Thanks.  Fixed.  And removed traced_functions.

Regards,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 23/24] record-btrace: add (reverse-)stepping support
  2013-08-18 19:09   ` Jan Kratochvil
@ 2013-09-17  9:43     ` Metzger, Markus T
  2013-09-29 17:24       ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-17  9:43 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Sunday, August 18, 2013 9:10 PM
> To: Metzger, Markus T

Thanks for your review.


> > There's an open regarding frame unwinding.  When I start stepping, the
> > frame cache will still be based on normal unwinding as will the frame
> > cached in the thread's stepping context.  This will prevent me from
> > detecting that i stepped into a subroutine.
> 
> Where do you detect you have stepped into a subroutine? That is up to GDB
> after your to_wait returns, in handle_inferior_event.

That's the place.  I don't have any code that detects this.

But this code compares a NORMAL_FRAME from before the step with a
BTRACE_FRAME from after the wait.  They will always be unequal hence
we will never recognize that we just reverse-stepped into a function.

When I reset the frame cache, GDB re-computes the stored frame and now
compares two BTRACE_FRAMEs, which works OK.


> > To overcome that, I'm resetting the frame cache and setting the
> > thread's stepping cache based on the current frame - which is now
> > computed using branch tracing unwind.  I had to split
> > get_current_frame to avoid checks that would prevent me from doing this.
> 
> This is not correct, till to_wait finishes the inferior is still executing and you
> cannot query its current state (such as its frame/pc/register).
> 
> I probably still miss why you do so.

See above.  Alternatively, I might add a special case to frame comparison,
but this would be quite ugly, as well.  Do you have a better idea?


> Proposing some hacked draft patch but for some testcases it FAILs for me;
> but they FAIL even without this patch as I run it on Nehalem.  I understand I
> may miss some problem there, though.
> 
> 
> > It looks like I don't need any special support for breakpoints.  Is
> > there a scenario where normal breakpoints won't work?
> 
> You already handle it specially in BTHR_CONT and in BTHR_RCONT by
> breakpoint_here_p.  As btrace does not record any data changes that may
> be enough.  "record full" has different situation as it records data changes.
> I think it is fine as you wrote it.
> 
> You could handle BTHR_CONT and BTHR_RCONT equally to BTHR_STEP and
> BTHR_RSTEP, just returning TARGET_WAITKIND_SPURIOUS instead of
> TARGET_WAITKIND_STOPPED.
> This way you would not need to handle specially breakpoint_here_p.
> But it would be sure slower.

I don't think performance is an issue, here.  I tried that and it didn't seem
to stop correctly resulting in lots of test fails.  I have not investigated it.


> > Non-stop mode is not working.  Do not allow record-btrace in non-stop
> mode.
> 
> While that seems OK for the initial check-in I do not think it is convenient.
> Some users use for example Eclipse in non-stop mode, they would not be
> able to use btrace then as one cannot change non-stop state when the
> inferior is running.  You can just disable the ALL_THREADS cases in record-
> btrace.c, can't you?

Record-full is not supporting non-stop, either.  I'm wondering what other
issues I might run into with non-stop mode that I am currently not aware of.


> > +    case BTHR_CONT:
> > +      /* We're done if we're not replaying.  */
> > +      if (replay == NULL)
> > +	return btrace_step_no_history ();
> > +
> > +      /* I'd much rather go from TP to its inferior, but how?  */
> 
> find_inferior_pid (ptid_get_pid (tp->ptid)) Although I do not see why you
> prefer the inferior here.

I need the address space which is stored in the inferior struct.


Regards,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 24/24] record-btrace: skip tail calls in back trace
  2013-08-18 19:10   ` Jan Kratochvil
@ 2013-09-17 14:28     ` Metzger, Markus T
  2013-09-18  8:28       ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-17 14:28 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil


> > The branch trace represents the caller/callee relationship of tail
> > calls.  The caller of a tail call is shown in the back trace and in
> > the function-call history.
> >
> > This is not consistent with GDB's normal behavior, where the tail
> > caller is not shown in the back trace.
> 
> This depends on the compiler and its options.  With recent GCCs and -O2 -g
> compilation tail calls are shown.  They are even tested for (full) reverse
> execution:
> Running ./gdb.reverse/amd64-tailcall-reverse.exp ...
> Running ./gdb.arch/amd64-tailcall-ret.exp ...
> Running ./gdb.arch/amd64-tailcall-cxx.exp ...
> Running ./gdb.arch/amd64-tailcall-noret.exp ...
> 
> In the -O0 -g mode they are not shown just because of the lack of debug
> info.
> AFAIK it is too expensive for GCC to produce it while -O0 -g compilation
> should be fast.
> 
> Surprisingly this gives in some cases -O2 -g compilation better debugging
> experience than -O0 -g compilation.

From this perspective, it would actually be a feature that tail calls are
available in the call stack for reverse/replay even when they are not
available for live debugging due to limited debug information.


> Still when I revert this GDB code patch then gdb.btrace/rn-dl-bind.exp does
> not reverse-next properly - what is the reason?
> 
> reverse-next^M
> __GI_____strtoul_l_internal (nptr=<unavailable>, endptr=<unavailable>,
> base=<optimized out>, group=<optimized out>, loc=<optimized out>) at
> ../stdlib/strtol_l.c:531^M
> 531     }^M
> (gdb) FAIL: gdb.btrace/rn-dl-bind.exp: rn-dl-bind, 2.3 bt^M
> #0  __GI_____strtoul_l_internal (nptr=<unavailable>, endptr=<unavailable>,
> base=<optimized out>, group=<optimized out>, loc=<optimized out>) at
> ../stdlib/strtol_l.c:531^M
> #1  0x00007ffff7228f8d in __GI_strtoul (nptr=<error reading variable:
> Registers are not available in btrace record history>, endptr=<error reading
> variable: Registers are not available in btrace record history>, base=<error
> reading variable: Registers are not available in btrace record history>) at
> ../stdlib/strtol.c:108^M
> #2  _dl_runtime_resolve () at ../sysdeps/x86_64/dl-trampoline.S:56^M
> #3  0x00000000004004c6 in ?? ()^M
> #4  0x00000000004004fb in strtoul@plt ()^M
> #5  0x000000000040060c in test () at ./gdb.btrace/rn-dl-bind.c:26^M
> #6  0x0000000000400621 in main () at ./gdb.btrace/rn-dl-bind.c:35^M
> Backtrace stopped: not enough registers or memory available to unwind
> further^M

I need to investigate this.

At some point, get_frame_func () returns 0, which is then used for the
code in the BTRACE_FRAME id.  This doesn't look OK at first glance.


> > It further causes the finish command to fail for tail calls.
> >
> > This patch skips tail calls when computing the back trace during
> > replay.  The finish command now works also for tail calls.

There were also some fails around finish.  I did not investigate those
after I realized that stepping behaves differently for live debugging
and replay.  The fails went away once I skipped tail calls.


Regards,
Markus.


^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 24/24] record-btrace: skip tail calls in back trace
  2013-09-17 14:28     ` Metzger, Markus T
@ 2013-09-18  8:28       ` Metzger, Markus T
  2013-09-18  9:52         ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-18  8:28 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: Metzger, Markus T
> Sent: Tuesday, September 17, 2013 4:28 PM


> > reverse-next^M
> > __GI_____strtoul_l_internal (nptr=<unavailable>, endptr=<unavailable>,
> > base=<optimized out>, group=<optimized out>, loc=<optimized out>) at
> > ../stdlib/strtol_l.c:531^M
> > 531     }^M
> > (gdb) FAIL: gdb.btrace/rn-dl-bind.exp: rn-dl-bind, 2.3 bt^M
> > #0  __GI_____strtoul_l_internal (nptr=<unavailable>,
> > endptr=<unavailable>, base=<optimized out>, group=<optimized out>,
> > loc=<optimized out>) at ../stdlib/strtol_l.c:531^M
> > #1  0x00007ffff7228f8d in __GI_strtoul (nptr=<error reading variable:
> > Registers are not available in btrace record history>, endptr=<error
> > reading
> > variable: Registers are not available in btrace record history>,
> > base=<error reading variable: Registers are not available in btrace
> > record history>) at ../stdlib/strtol.c:108^M
> > #2  _dl_runtime_resolve () at ../sysdeps/x86_64/dl-trampoline.S:56^M
> > #3  0x00000000004004c6 in ?? ()^M
> > #4  0x00000000004004fb in strtoul@plt ()^M
> > #5  0x000000000040060c in test () at ./gdb.btrace/rn-dl-bind.c:26^M
> > #6  0x0000000000400621 in main () at ./gdb.btrace/rn-dl-bind.c:35^M
> > Backtrace stopped: not enough registers or memory available to unwind
> > further^M
> 
> I need to investigate this.

If we skip tail calls, GDB recognizes that we reverse-stepped into a
subroutine and keeps stepping.

If we don't skip tail calls, GDB fails to recognize this and stops stepping
due to the absence of line information.

The line information is also missing when we skip tail calls, but the stepped-
into-subroutine check comes before the has-line-info check.

When searching for the caller frame id in infrun.c, GDB skips artificial frames
including normal TAILCALL_FRAMEs.  I guess this is why it works for live
stepping and also for record-full.

One way to solve this would be to add a BTRACE_TAILCALL_FRAME and
extend struct target_ops to provide two optional unwinders that are both
tried before any arch unwinder.  I'd try this unless you have a better idea.
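
For illustration, the second unwinder instance could look roughly like
this (field layout mirroring the existing record_btrace_frame_unwind; the
tail-call sniffer is an assumed name):

  static const struct frame_unwind record_btrace_tailcall_frame_unwind =
  {
    BTRACE_TAILCALL_FRAME,
    record_btrace_frame_unwind_stop_reason,
    record_btrace_frame_this_id,
    record_btrace_frame_prev_register,
    NULL,
    record_btrace_tailcall_frame_sniffer,
    record_btrace_frame_dealloc_cache
  };

Only the sniffer would differ: it accepts a frame exactly when the
function segment was entered via a tail call, i.e. when
(bfun->flags & BFUN_UP_LINKS_TO_TAILCALL) != 0.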

Regards,
Markus.


^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 24/24] record-btrace: skip tail calls in back trace
  2013-09-18  8:28       ` Metzger, Markus T
@ 2013-09-18  9:52         ` Metzger, Markus T
  0 siblings, 0 replies; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-18  9:52 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: Metzger, Markus T
> Sent: Wednesday, September 18, 2013 10:28 AM


> One way to solve this would be to add a BTRACE_TAILCALL_FRAME and
> extend struct target_ops to provide two optional unwinders that are both
> tried before any arch unwinder.  I'd try this unless you have a better idea.

This fixes all fails in the existing tests.

There's one problem found by a new test I added:  finish from a tail-called
function will not stop stepping.

I'll first incorporate the above fix into the patch series and send an updated
version out for review later today.  Then I'll look at the finish-from-tailcall
problem.

Regards,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 03/24] btrace: change branch trace data structure
  2013-09-16  9:01         ` Metzger, Markus T
@ 2013-09-21 19:44           ` Jan Kratochvil
  2013-09-23  6:54             ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-21 19:44 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, Himpel, Christian

On Mon, 16 Sep 2013 10:59:38 +0200, Metzger, Markus T wrote:
> > The code is making needless assumptions about get_pc_function_start
> > inners.
> 
> I removed the symbol NULL check and instead check for a zero return of
> get_pc_function_start (PC).  This still rules out zero as a valid PC value,
> but that's the current error return value of get_pc_function_start.

OK.


> > OK, please do not misuse patch series for chronological development.
> > Patch series splitting is there for separation of topic.
> 
> Do you want me to squash the series into a single patch?

Definitely not.  Not that it is too important but:

I meant that
	[patch v4 08/24] record-btrace: make ranges include begin and end
could be merged into
	[patch v4 03/24] btrace: change branch trace data structure
as the patch #08 modifies only new code from patch #03, without any
incremental additions; just changing exclusive range implemented by #03 to
an inclusive range.

I understand one can easily overlook it on such a big series or I missed some
other reason for the separate #08 patch.


> > > > Unrelated to this patch but the function
> > > > record_btrace_insn_history_from does not need to be virtualized.  It
> > > > does not access any internals of record-btrace.c, it could be fully
> > > > implemented in the superclass record.c and to_insn_history_from
> > > > could be deleted.
> > > >
> > > > The same applies for record_btrace_call_history_from and
> > > > to_call_history_from.
> > >
> > > Both depend on the numbering scheme, which is an implementation detail.
> > > They both assume that counting starts at 0 (at 1 in a later patch).
> > >
> > > This does not hold for record-full, where the lowest instruction may
> > > be bigger than zero.
> > 
> > OK, one reason is that currently there is no implementation of these
> > methods for record-full:
> > 	(gdb) record instruction-history
> > 	You can't do that when your target is `record-full'
> > 
> > The second reason is that while record-full can drop old record, seeing only
> > the last window:
> > 	(gdb) set record full insn-number-max 10
> > 	(gdb) record
> > 	(gdb) info record
> > 	Active record target: record-full
> > 	Record mode:
> > 	Lowest recorded instruction number is 1587.
> > 	Highest recorded instruction number is 1596.
> > 	Log contains 10 instructions.
> > 	Max logged instructions is 10.
> > 
> > btrace backend does not seem to support such sliding window (the kernel
> > buffer sliding is unrelated).  GDB still stores in its memory all the btrace
> > records and one cannot do anything like
> > 	(gdb) set record btrace insn-number-max 10
> 
> It's inherent in btrace.  We only ever see the tail of the trace.  We extend the
> recorded trace when the kernel buffer does not overflow between updates.
> Otherwise, we discard the trace in GDB and start anew with the current tail.

(gdb) set record full insn-number-max 3
(gdb) record 
(gdb) stepi
(gdb) stepi
(gdb) stepi
(gdb) info record 
Active record target: record-full
Record mode:
Lowest recorded instruction number is 1.
Highest recorded instruction number is 3.
Log contains 3 instructions.
Max logged instructions is 3.
(gdb) stepi
Do you want to auto delete previous execution log entries when record/replay buffer becomes full (record full stop-at-limit)?([y] or n) y
(gdb) info record 
Active record target: record-full
Record mode:
Lowest recorded instruction number is 2.
Highest recorded instruction number is 4.
Log contains 3 instructions.
Max logged instructions is 3.

While 'record full' stores only a tail of the selected size, 'record btrace'
stores everything, and one has to occasionally 'record stop' and 'record
btrace' again, otherwise GDB runs out of memory.  At least this is what
I expect for long-running inferiors; I do not have Haswell available to
verify it.

With btrace one cannot select the tail size (there is nothing like
'set record btrace insn-number-max 3'), perf_event_buffer_size() is
auto-detected, 4MB max.

I am trying to explain that the numbering ranges X-Y (and not just 1-Y)
should apply also to record btrace, not just to record full.  btrace also
needs to drop very old records, and it is inconvenient for users if the
events get renumbered all the time.

This also implies that the function/instruction numbering style of both
btrace and full should be the same, and therefore those functions should
be common in record.c; the virtualization of to_call_history_from is then
not needed.


> > Still I believe the code for the methods like to_insn_history_from should be
> > common for all the backends as the user visible behavior should be the
> > same.
> > And this common code should support arbitrary "Lowest recorded instruction
> > number" (which the btrace backend currently does not support).
> 
> The lowest recorded instruction is always zero for record-btrace.

Which may cause GDB's memory use to grow by many MB if one traces a
long-running inferior, I guess.


> If we added target methods to query for the lowest and highest instruction
> number, we could implement the logic in record.c.  I didn't see any benefit
> in that, so I didn't do it.  We will end up with about the same number of
> target methods either way.

Maybe the number of methods will be the same but it seems more logical to me
that the numbering/windowing should be common for all the backends.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 11/24] record-btrace: supply register target methods
  2013-09-16  9:19     ` Metzger, Markus T
@ 2013-09-22 13:55       ` Jan Kratochvil
  2013-09-23  6:55         ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-22 13:55 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches

On Mon, 16 Sep 2013 11:18:02 +0200, Metzger, Markus T wrote:
> OK.  I added an error message for the to_store_registers method.

OK:

(gdb) p $rax=1
This record target does not allow writing registers.


> > > +
> > > +  if (may_write_registers == 0)
> > > +    error (_("Writing to registers is not allowed (regno %d)"),
> > > + regno);
> > 
> > Here should be rather:
> >   gdb_assert (may_write_registers == 0);
> > 
> > as target_store_registers() would not pass the call here otherwise.
> 
> I took this from target_store_registers in target.c.

But that is a different case.  The case 'may_write_registers == 0' is always
already caught by target_store_registers().  record_btrace_store_registers()
gets called only by target_store_registers() which already verified the
variable is not zero.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 14/24] record-btrace: provide xfer_partial target method
  2013-09-16  9:30     ` Metzger, Markus T
@ 2013-09-22 14:18       ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-22 14:18 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches

On Mon, 16 Sep 2013 11:30:48 +0200, Metzger, Markus T wrote:
> > > +static LONGEST
> > > +record_btrace_xfer_partial (struct target_ops *ops, enum target_object
> > object,
> > > +			    const char *annex, gdb_byte *readbuf,
> > > +			    const gdb_byte *writebuf, ULONGEST offset,
> > > +			    LONGEST len)
> > > +{
> > > +  struct target_ops *t;
> > > +
> > > +  /* Normalize the request so len is positive.  */  if (len < 0)
> > > +    {
> > > +      offset += len;
> > > +      len = - len;
> > > +    }
> > 
> > I do not see LEN could be < 0, do you?  Use just:
> >   gdb_assert (len >= 0);
> > (It even should never be LEN == 0 but that may not be guaranteed.)
> 
> Hmm, why didn't we use ULONGEST, then?

Nobody says the current GDB codebase / API is perfect.
Feel free to submit a patch changing len LONGEST->ULONGEST.


> It looks like all implementations in target.c assume LEN to be positive without
> checking.  I'm doing the same.

I see you have just removed the "normalization" so btrace code is now like
other GDB code, that is also fine.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 20/24] btrace, gdbserver: read branch trace incrementally
  2013-09-16 12:48     ` Metzger, Markus T
@ 2013-09-22 14:42       ` Jan Kratochvil
  2013-09-23  7:09         ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-22 14:42 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, Pedro Alves

On Mon, 16 Sep 2013 14:48:42 +0200, Metzger, Markus T wrote:
> The -EOVERFLOW return signals a buffer overflow which indicates that
> delta trace is not available.  GDB then switches to a full read after discarding
> the existing trace.

Then the linux_read_btrace function comment should document this, as
-EOVERFLOW is specific to its API.  But I would find an enum clearer; see
below.


> The -ENOSYS return signals that the feature is not available.  This error is
> passed on to the user.
+
> > TBH returning all these errno codes are not common in GDB, returning -1
> > would make it easier but I do not insist on it.
> 
> I need to distinguish different types of errors, e.g. overflow and not supported.

Then use enum.  There is for example:
enum return_reason
  {
    /* User interrupt.  */
    RETURN_QUIT = -2,
    /* Any other error.  */
    RETURN_ERROR
  };

One could even throw and catch specific exceptions (enum errors) but I find
that needlessly overcomplicated when we just return to the immediate caller.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 03/24] btrace: change branch trace data structure
  2013-09-12 20:09       ` Jan Kratochvil
  2013-09-16  9:01         ` Metzger, Markus T
@ 2013-09-22 16:57         ` Jan Kratochvil
  2013-09-22 17:16           ` Jan Kratochvil
  1 sibling, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-22 16:57 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, Himpel, Christian

On Thu, 12 Sep 2013 22:09:27 +0200, Jan Kratochvil wrote:
> struct btrace_function
>   /* The function level in a back trace across the entire branch trace.
>      A caller's level is one higher than the level of its callee.
> 
>      Levels can be negative if we see returns for which we have not seen
>      the corresponding calls.  The branch trace thread information provides
>      a fixup to normalize function levels so the smallest level is zero.  */
>   int level;
> 
> should be:
> -    A caller's level is one higher than the level of its callee.
> +    A callee's level is one higher than the level of its caller.
> 
> as one can see for gdb.btrace/tailcall.exp:
> 
> record function-call-history /c 1^M
> 1       0main^M
> 2       1  foo^M
> 3       2    bar^M
> 4       0main^M
>         ^
> 
> In such case please rename btrace_function->level to something else, such as
> btrace_function->calls_level or btrace_function->reverse_level etc.
> as it is the opposite of the related GDB frame_info->level field.


This part of my mail somehow got lost; I do not see your reply mentioning
it, and I also do not see any change for btrace_function.level in the patch
series v6.


Regards,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 03/24] btrace: change branch trace data structure
  2013-09-22 16:57         ` Jan Kratochvil
@ 2013-09-22 17:16           ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-22 17:16 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, Himpel, Christian

Hi Markus,

please disregard this mail of mine, you have fixed it by:

   /* The function level in a back trace across the entire branch trace.
-     A caller's level is one higher than the level of its callee.
+     A caller's level is one lower than the level of its callee.


Jan


On Sun, 22 Sep 2013 18:57:20 +0200, Jan Kratochvil wrote:
> On Thu, 12 Sep 2013 22:09:27 +0200, Jan Kratochvil wrote:
> > struct btrace_function
> >   /* The function level in a back trace across the entire branch trace.
> >      A caller's level is one higher than the level of its callee.
> > 
> >      Levels can be negative if we see returns for which we have not seen
> >      the corresponding calls.  The branch trace thread information provides
> >      a fixup to normalize function levels so the smallest level is zero.  */
> >   int level;
> > 
> > should be:
> > -    A caller's level is one higher than the level of its callee.
> > +    A callee's level is one higher than the level of its caller.
> > 
> > as one can see for gdb.btrace/tailcall.exp:
> > 
> > record function-call-history /c 1^M
> > 1       0main^M
> > 2       1  foo^M
> > 3       2    bar^M
> > 4       0main^M
> >         ^
> > 
> > In such case please rename btrace_function->level to something else, such as
> > btrace_function->calls_level or btrace_function->reverse_level etc.
> > as it is the opposite of the related GDB frame_info->level field.
> 
> 
> This part of my mail somehow got lost; I do not see your reply mentioning
> it, and I also do not see any change for btrace_function.level in the patch
> series v6.
> 
> 
> Regards,
> Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 03/24] btrace: change branch trace data structure
  2013-09-21 19:44           ` Jan Kratochvil
@ 2013-09-23  6:54             ` Metzger, Markus T
  2013-09-23  7:15               ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-23  6:54 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches, Himpel, Christian

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Saturday, September 21, 2013 9:44 PM


> > > > > Unrelated to this patch but the function
> > > > > record_btrace_insn_history_from does not need to be virtualized.  It
> > > > > does not access any internals of record-btrace.c, it could be fully
> > > > > implemented in the superclass record.c and to_insn_history_from
> > > > > could be deleted.
> > > > >
> > > > > The same applies for record_btrace_call_history_from and
> > > > > to_call_history_from.
> > > >
> > > > Both depend on the numbering scheme, which is an implementation
> detail.
> > > > They both assume that counting starts at 0 (at 1 in a later patch).
> > > >
> > > > This does not hold for record-full, where the lowest instruction may
> > > > be bigger than zero.
> > >
> > > OK, one reason is that currently there is no implementation of these
> > > methods for record-full:
> > > 	(gdb) record instruction-history
> > > 	You can't do that when your target is `record-full'
> > >
> > > The second reason is that while record-full can drop old record, seeing
> only
> > > the last window:
> > > 	(gdb) set record full insn-number-max 10
> > > 	(gdb) record
> > > 	(gdb) info record
> > > 	Active record target: record-full
> > > 	Record mode:
> > > 	Lowest recorded instruction number is 1587.
> > > 	Highest recorded instruction number is 1596.
> > > 	Log contains 10 instructions.
> > > 	Max logged instructions is 10.
> > >
> > > btrace backend does not seem to support such sliding window (the
> kernel
> > > buffer sliding is unrelated).  GDB still stores in its memory all the btrace
> > > records and one cannot do anything like
> > > 	(gdb) set record btrace insn-number-max 10
> >
> > It's inherent in btrace.  We only ever see the tail of the trace.  We extend
> the
> > recorded trace when the kernel buffer does not overflow between
> updates.
> > Otherwise, we discard the trace in GDB and start anew with the current tail.
> 
> (gdb) set record full insn-number-max 3
> (gdb) record
> (gdb) stepi
> (gdb) stepi
> (gdb) stepi
> (gdb) info record
> Active record target: record-full
> Record mode:
> Lowest recorded instruction number is 1.
> Highest recorded instruction number is 3.
> Log contains 3 instructions.
> Max logged instructions is 3.
> (gdb) stepi
> Do you want to auto delete previous execution log entries when
> record/replay buffer becomes full (record full stop-at-limit)?([y] or n) y
> (gdb) info record
> Active record target: record-full
> Record mode:
> Lowest recorded instruction number is 2.
> Highest recorded instruction number is 4.
> Log contains 3 instructions.
> Max logged instructions is 3.
> 
> While 'record full' stores only the tail of selected size 'record btrace'
> stores everything and one has to occasionally 'record stop' and 'record
> btrace'
> again otherwise GDB runs out of memory.  At least this is what I expect for
> long-term running inferiors, I do not have Haswell available to verify it.

When you trace a long-running inferior with record btrace, you will only
get the tail of the trace, independent of how long you let it run.

The trace is collected in a cyclic buffer by the h/w.  When the inferior
stops, GDB reads that buffer which corresponds to the tail of the
inferior's execution trace.

That is, GDB first tries to read the delta trace and stitch it to the old
trace from the previous read.  This only works as long as the CPU buffer
does not overflow.  In practice, this should work for
single-stepping with the occasional next over relatively short functions.

So if you keep on single-stepping for a very long time, you may indeed
exhaust GDB's memory.  I don't think that this is a real issue in practice,
though.

As soon as you next over a big function or continue the inferior,
the h/w buffer will very likely overflow and you will again get the
tail of the trace.  GDB will implicitly discard the old trace.  It doesn't
know what happened between the old and the new trace.


> With btrace one cannot select the tail size (there is nothing like
> 'set record btrace insn-number-max 3'), perf_event_buffer_size() is
> auto-detected, 4MB max.

That's correct.  I can't really come up with a good reason why you would
want less trace.  Maybe if you had a really big number of threads or were
really just interested in the last handful of branches.


> I try to explain the numbering ranges X-Y (and not just 1-Y) should apply also
> to record btrace, not just to record full.  btrace also needs to drop very old
> records and it is inconvenient for users to renumber the events all the time.

It's impossible for btrace to keep track of the number of instructions that
have been executed so far without increasing the overhead tremendously.  GDB
would need to stop the debuggee before the trace buffer runs full and then
process the trace and either store it or discard it.  This means very frequent
interrupts plus the time it takes for downloading and processing the trace.


> > The lowest recorded instruction is always zero for record-btrace.
> 
> Which may cause GDB memory many-MB overflow if one traces long-running
> inferior, I guess.

No.  It means that GDB renumbers the instructions each time the trace
buffer overflows.  So zero (later changed to one) is always the first
instruction that is available in the recorded execution history.


Regards,
Markus.


^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 11/24] record-btrace: supply register target methods
  2013-09-22 13:55       ` Jan Kratochvil
@ 2013-09-23  6:55         ` Metzger, Markus T
  0 siblings, 0 replies; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-23  6:55 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil


> > > Here should be rather:
> > >   gdb_assert (may_write_registers == 0);
> > >
> > > as target_store_registers() would not pass the call here otherwise.
> >
> > I took this from target_store_registers in target.c.
> 
> But that is a different case.  The case 'may_write_registers == 0' is always
> already caught by target_store_registers().  record_btrace_store_registers()
> gets called only by target_store_registers() which already verified the
> variable is not zero.

OK.  Changed it.
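
Roughly, the resulting method could look like this (sketch only; the
replay check and the forwarding loop are assumptions, not the exact
patch):

  static void
  record_btrace_store_registers (struct target_ops *ops,
                                 struct regcache *regcache, int regno)
  {
    struct target_ops *t;

    if (record_btrace_is_replaying ())  /* assumed helper */
      error (_("This record target does not allow writing registers."));

    /* target_store_registers already rejects the call when register
       writes are globally disallowed, so an assertion suffices here.  */
    gdb_assert (may_write_registers != 0);

    /* Forward the request to the target beneath.  */
    for (t = ops->beneath; t != NULL; t = t->beneath)
      if (t->to_store_registers != NULL)
        {
          t->to_store_registers (t, regcache, regno);
          return;
        }
  }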

Regards,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 20/24] btrace, gdbserver: read branch trace incrementally
  2013-09-22 14:42       ` Jan Kratochvil
@ 2013-09-23  7:09         ` Metzger, Markus T
  2013-09-25 19:05           ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-23  7:09 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches, Pedro Alves

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Sunday, September 22, 2013 4:42 PM


> > I need to distinguish different types of errors, e.g. overflow and not
> supported.
> 
> Then use enum.  There is for example:
> enum return_reason
>   {
>     /* User interrupt.  */
>     RETURN_QUIT = -2,
>     /* Any other error.  */
>     RETURN_ERROR
>   };
> 
> One could even throw and catch specific exceptions (enum errors) but I find
> that needlessly overcomplicated when we just return to the immediate
> caller.

In addition to errors I defined myself, I might get errors from the system call,
e.g. ENOMEM, EOPNOTSUPP, ENOSYS.  For the not-available function, for
example, I'm just mimicking the error that would be returned by the system
call on systems where that call is not available.

Do you want me to translate those into an enum?

Markus.


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 03/24] btrace: change branch trace data structure
  2013-09-23  6:54             ` Metzger, Markus T
@ 2013-09-23  7:15               ` Jan Kratochvil
  2013-09-23  7:27                 ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-23  7:15 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, Himpel, Christian

On Mon, 23 Sep 2013 08:54:01 +0200, Metzger, Markus T wrote:
> The trace is collected in a cyclic buffer by the h/w.  When the inferior
> stops, GDB reads that buffer which corresponds to the tail of the
> inferior's execution trace.

I somehow expected the kernel to SIGTRAP the process when the buffer
overflows so that the buffer could be read in.

OK, GDB code now makes more sense to me, just it should be more described in
the manual, I have made such comment in [patch v6 21/21] after I reply it.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 03/24] btrace: change branch trace data structure
  2013-09-23  7:15               ` Jan Kratochvil
@ 2013-09-23  7:27                 ` Metzger, Markus T
  0 siblings, 0 replies; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-23  7:27 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches, Himpel, Christian

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil


> > The trace is collected in a cyclic buffer by the h/w.  When the inferior
> > stops, GDB reads that buffer which corresponds to the tail of the
> > inferior's execution trace.
> 
> I somehow expected the kernel to SIGTRAP the process when the buffer
> overflows so that the buffer could be read in.
> 
> OK, GDB code now makes more sense to me, just it should be more
> described in
> the manual, I have made such comment in [patch v6 21/21] after I reply it.

I'll wait for your reply.

Thanks,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 20/24] btrace, gdbserver: read branch trace incrementally
  2013-09-23  7:09         ` Metzger, Markus T
@ 2013-09-25 19:05           ` Jan Kratochvil
  2013-09-26  6:27             ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-25 19:05 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches, Pedro Alves

On Mon, 23 Sep 2013 09:09:18 +0200, Metzger, Markus T wrote:
> > > I need to distinguish different types of errors, e.g. overflow and not
> > supported.
> > 
> > Then use enum.  There is for example:
> > enum return_reason
> >   {
> >     /* User interrupt.  */
> >     RETURN_QUIT = -2,
> >     /* Any other error.  */
> >     RETURN_ERROR
> >   };
> > 
> > One could even throw and catch specific exceptions (enum errors) but I find
> > that needlessly overcomplicated when we just return to the immediate
> > caller.
> 
> In addition to errors I defined myself, I might get errors from the system call,
> e.g. ENOMEM, EOPNOTSUPP, ENOSYS.

I do not see such a system call.  linux_read_btrace can only ever return 0,
-EOVERFLOW, or -ENOSYS, and nothing else.  It never returns, for example, a
variable value like "-errno".


> For the not-available function, for
> example, I'm just mimicking the error that would be returned by the system
> call on systems where that call is not available.

This is not GDB style; it probably comes from the Linux kernel.  GDB code
should not needlessly depend on any system E* macros, as that reduces
portability (these are linux-* files, but still).


> Do you want me to translate those into an enum?

As it can return only 0, -EOVERFLOW, and -ENOSYS: yes, I find an enum the
best option.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 20/24] btrace, gdbserver: read branch trace incrementally
  2013-09-25 19:05           ` Jan Kratochvil
@ 2013-09-26  6:27             ` Metzger, Markus T
  0 siblings, 0 replies; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-26  6:27 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches, Pedro Alves

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil


> > > > I need to distinguish different types of errors, e.g. overflow and not
> > > supported.
> > >
> > > Then use enum.  There is for example:
> > > enum return_reason
> > >   {
> > >     /* User interrupt.  */
> > >     RETURN_QUIT = -2,
> > >     /* Any other error.  */
> > >     RETURN_ERROR
> > >   };
> > >
> > > One could even throw and catch specific exceptions (enum errors) but I
> find
> > > that needlessly overcomplicated when we just return to the immediate
> > > caller.
> >
> > In addition to errors I defined myself, I might get errors from the system
> call,
> > e.g. ENOMEM, EOPNOTSUPP, ENOSYS.
> 
> I do not see such a system call.  linux_read_btrace can only ever return 0,
> -EOVERFLOW, or -ENOSYS, and nothing else.  It never returns, for example, a
> variable value like "-errno".

The system call is in linux_enable_btrace where I use a similar scheme.


> > Do you want me to translate those into an enum?
> 
> As it can return only 0, -EOVERFLOW, and -ENOSYS: yes, I find an enum the
> best option.

OK.  I'll wait for more feedback on the other patches in v6 before sending
the v7 version with those changes.
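
A possible shape for that translation (sketch only; the names are
placeholders, not from any posted patch):

  /* Error codes for branch trace reads, replacing raw negative errno
     values in the target interface.  */
  enum btrace_error
  {
    /* No error; the read succeeded.  */
    BTRACE_ERR_NONE,

    /* Branch tracing is not supported on this system (was -ENOSYS).  */
    BTRACE_ERR_NOT_SUPPORTED,

    /* The branch trace buffer overflowed; a delta read is not possible
       (was -EOVERFLOW).  */
    BTRACE_ERR_OVERFLOW
  };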

Regards,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 18/24] record-btrace: extend unwinder
  2013-09-16 11:21     ` Metzger, Markus T
@ 2013-09-27 13:55       ` Jan Kratochvil
  2013-09-30  9:45         ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-27 13:55 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches

[-- Attachment #1: Type: text/plain, Size: 4232 bytes --]

On Mon, 16 Sep 2013 13:21:29 +0200, Metzger, Markus T wrote:
> > > An assertion in get_frame_id at frame.c:340 requires that a frame
> > > provides a stack address.  The record-btrace unwinder can't provide
> > > this since the trace does not contain data.  I incorrectly set
> > > stack_addr_p to 1 to avoid the assertion.
> > 
> > Primarily record-btrace can provide the stack address.  You know $sp at the
> > end of the recoding and you can query .eh_frame/.debug_frame at any PC
> > address what is the difference between $sp and caller's $sp at that exact PC.
> > This assumes either all the involved binaries were built with -fasynchronous-
> > unwind-tables (for .eh_frame) or that debug info (for .debug_frame) is
> > present.  The former is true in Fedora / Red Hat distros, unaware how others.
> 
> This would only hold for functions that have not yet returned to their caller.
> If we go back far enough, the branch trace will also contain functions that
> have already returned to their caller for which we do not have any information.
> I would even argue that this is the majority of functions in the branch trace.

In many cases one can reconstruct $sp.  But, for example, if alloca() was in
use, I see now that $sp cannot be reconstructed.  So I agree now that GDB has
to handle cases where $sp is not known for a frame_id.

BTW in many cases one really can reconstruct all past $sp addresses from the
btrace buffer.  I have tried it for one backtrace of /usr/bin/gdb:

It will not work (at least) in two cases:

 * for -O0 code (not -O2) GCC does not produce DW_CFA_def_cfa_offset but it
   provides just:
     DW_CFA_def_cfa_register: r6 (rbp)
   As you describe, we do not know $rbp at that time anymore.

 * Even in -O2 code if a function uses alloca() GCC will produce again:
     DW_CFA_def_cfa_register: r6 (rbp)

I have no idea for what percentage of real-world code the
DW_CFA_def_cfa_offset dependency would work; C++ CFI may look different from
the plain C GDB code I tried below:

CIE has always:
  DW_CFA_def_cfa: r7 (rsp) ofs 8

#0  0x00007ffff5eef950 in __poll_nocancel
  DW_CFA_def_cfa: r7 (rsp) ofs 8
#1  0x000000000059bb63 in poll
  DW_CFA_def_cfa_offset: 80
#2  gdb_wait_for_event
#3  0x000000000059c2da in gdb_do_one_event
  DW_CFA_def_cfa_offset: 64
#4  0x000000000059c517 in start_event_loop
  DW_CFA_def_cfa_offset: 48
#5  0x00000000005953a3 in captured_command_loop
  DW_CFA_def_cfa_offset: 16
#6  0x00000000005934aa in catch_errors
  DW_CFA_def_cfa_offset: 112
#7  0x000000000059607e in captured_main
  DW_CFA_def_cfa_offset: 192
#8  0x00000000005934aa in catch_errors
  DW_CFA_def_cfa_offset: 112
#9  0x0000000000596c44 in gdb_main
  DW_CFA_def_cfa_offset: 16
#10 0x000000000045526e in main
  DW_CFA_def_cfa_offset: 64

As an obvious check $sp in #10 main 0x7fffffffd940 - $sp in #0 0x7fffffffd6b8:
(gdb) p 0x7fffffffd940 - 0x7fffffffd6b8
$1 = 648
8+80+64+48+16+112+192+112+16 = 648

This is just FYI, I do not ask to implement it.  I do not think knowing just
$sp is too important when it works only sometimes.


> > The current method of constant STACK_ADDR may have some problems with
> > frame_id_inner() but I did not investigate it more.
> 
> By looking at the code, frame_id_inner () should always fail since all btrace
> frames have stack_addr == 0.
> 
> On the other hand, frame_id_inner is only called for frames of type
> NORMAL_FRAME, whereas btrace frames have type BTRACE_FRAME.

OK, I agree now frame_id_inner() is not needed.


> This has meanwhile been resolved.  This had been a side-effect of throwing
> an error in to_fetch_registers.  When I just return, function arguments are
> correctly displayed as unavailable and the "can't compute CFA for this frame"
> message is gone.

With the v6 patchset it is only sometimes gone; I still get it.
Tested with (results are the same):
	gcc (GCC) 4.8.2 20130927 (prerelease)
	gcc-4.8.1-10.fc21.x86_64

int f(int i) {
  return i;
}
int main(void) {
  f(1);
  return 0;
}

gcc -o test3 test3.c -Wall -g 
./gdb ./test3 -ex start -ex 'record btrace' -ex step -ex step -ex reverse-step -ex frame
#0  f (i=<error reading variable: can't compute CFA for this frame>) at test3.c:2
2	  return i;
(gdb) _

It gets fixed by the attached patch.


Thanks,
Jan

[-- Attachment #2: cfa.patch --]
[-- Type: text/plain, Size: 1824 bytes --]

diff --git a/gdb/dwarf2-frame.c b/gdb/dwarf2-frame.c
index 2aff23e..518b0b9 100644
--- a/gdb/dwarf2-frame.c
+++ b/gdb/dwarf2-frame.c
@@ -1495,9 +1495,13 @@ dwarf2_frame_base_sniffer (struct frame_info *this_frame)
 CORE_ADDR
 dwarf2_frame_cfa (struct frame_info *this_frame)
 {
+  extern const struct frame_unwind record_btrace_frame_unwind;
+  extern const struct frame_unwind record_btrace_tailcall_frame_unwind;
   while (get_frame_type (this_frame) == INLINE_FRAME)
     this_frame = get_prev_frame (this_frame);
-  if (get_frame_unwind_stop_reason (this_frame) == UNWIND_UNAVAILABLE)
+  if (get_frame_unwind_stop_reason (this_frame) == UNWIND_UNAVAILABLE
+      || frame_unwinder_is (this_frame, &record_btrace_frame_unwind)
+      || frame_unwinder_is (this_frame, &record_btrace_tailcall_frame_unwind))
     throw_error (NOT_AVAILABLE_ERROR,
                 _("can't compute CFA for this frame: "
                   "required registers or memory are unavailable"));
diff --git a/gdb/record-btrace.c b/gdb/record-btrace.c
index d634712..9a4287b 100644
--- a/gdb/record-btrace.c
+++ b/gdb/record-btrace.c
@@ -1217,7 +1217,7 @@ record_btrace_frame_dealloc_cache (struct frame_info *self, void *this_cache)
    Therefore this unwinder reports any possibly unwound registers as
    <unavailable>.  */
 
-static const struct frame_unwind record_btrace_frame_unwind =
+const struct frame_unwind record_btrace_frame_unwind =
 {
   BTRACE_FRAME,
   record_btrace_frame_unwind_stop_reason,
@@ -1228,7 +1228,7 @@ static const struct frame_unwind record_btrace_frame_unwind =
   record_btrace_frame_dealloc_cache
 };
 
-static const struct frame_unwind record_btrace_tailcall_frame_unwind =
+const struct frame_unwind record_btrace_tailcall_frame_unwind =
 {
   BTRACE_TAILCALL_FRAME,
   record_btrace_frame_unwind_stop_reason,

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 23/24] record-btrace: add (reverse-)stepping support
  2013-09-17  9:43     ` Metzger, Markus T
@ 2013-09-29 17:24       ` Jan Kratochvil
  2013-09-30  9:30         ` Metzger, Markus T
  0 siblings, 1 reply; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-29 17:24 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches

On Tue, 17 Sep 2013 11:43:28 +0200, Metzger, Markus T wrote:
> But this code compares a NORMAL_FRAME from before the step with a
> BTRACE_FRAME from after the wait.  They will always be unequal hence
> we will never recognize that we just reverse-stepped into a function.
> 
> When I reset the frame cache, GDB re-computes the stored frame and now
> compares two BTRACE_FRAMEs, which works OK.
[...]
> See above.  Alternatively, I might add a special case to frame comparison,
> but this would be quite ugly, as well.  Do you have a better idea?

+record_btrace_start_replaying (struct thread_info *tp)
[...]
+  /* Make sure we're not using any stale registers.  */
+  registers_changed_ptid (tp->ptid);
+
+  /* We just started replaying.  The frame id cached for stepping is based
+     on unwinding, not on branch tracing.  Recompute it.  */
+  frame = get_current_frame_nocheck ();
+  insn = btrace_insn_get (replay);
+  sal = find_pc_line (insn->pc, 0);
+  set_step_info (frame, sal);

The problem comes from the new commands like "record goto" which change
inferior content without resuming+stopping it.

Former "record full" could only change history position by "step/reverse-step"
(or similar commands) which did resume+stop the inferior.

To make the "record goto" command friendly to the GDB infrastructure
expectations I think you should put a temporary breakpoint to the target
instruction, resume the inferior and simulate stop at the temporary
breakpoint.

I think all the registers_changed_ptid() calls could be removed afterwards.


> > Proposing some hacked draft patch but for some testcases it FAILs for me;
> > but they FAIL even without this patch as I run it on Nehalem.  I understand I
> > may miss some problem there, though.
> > 
> > 
> > > It looks like I don't need any special support for breakpoints.  Is
> > > there a scenario where normal breakpoints won't work?
> > 
> > You already handle it specially in BTHR_CONT and in BTHR_RCONT by
> > breakpoint_here_p.  As btrace does not record any data changes that may
> > be enough.  "record full" has different situation as it records data changes.
> > I think it is fine as you wrote it.
> > 
> > You could handle BTHR_CONT and BTHR_RCONT equally to BTHR_STEP and
> > BTHR_RSTEP, just returning TARGET_WAITKIND_SPURIOUS instead of
> > TARGET_WAITKIND_STOPPED.
> > This way you would not need to handle specially breakpoint_here_p.
> > But it would be sure slower.
> 
> I don't think performance is an issue, here.  I tried that and it didn't seem
> to stop correctly resulting in lots of test fails.  I have not investigated it.

My idea was wrong, handle_inferior_event checks for
breakpoint_inserted_here_p() only if it sees GDB_SIGNAL_TRAP.  With
TARGET_WAITKIND_SPURIOUS it does not notice any breakpoint.

(One could return TARGET_WAITKIND_SPURIOUS instead of looping in
BTHR_CONT+BTHR_RCONT but that has no advantage, it is just slower.)

And sure reporting GDB_SIGNAL_TRAP without breakpoint_inserted_here_p() also
does not work, that ends up with:
	Program received signal SIGTRAP, Trace/breakpoint trap.

So I agree with your implementation, record-full.c also does it that way.


> > > Non-stop mode is not working.  Do not allow record-btrace in non-stop
> > mode.
> > 
> > While that seems OK for the initial check-in I do not think it is convenient.
> > Some users use for example Eclipse in non-stop mode, they would not be
> > able to use btrace then as one cannot change non-stop state when the
> > inferior is running.  You can just disable the ALL_THREADS cases in record-
> > btrace.c, can't you?
> 
> Record-full is not supporting non-stop, either.  I'm wondering what other
> issues I might run into with non-stop mode that I am currently not aware of.

I do not know an answer without trying it.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 23/24] record-btrace: add (reverse-)stepping support
  2013-09-29 17:24       ` Jan Kratochvil
@ 2013-09-30  9:30         ` Metzger, Markus T
  2013-09-30 10:25           ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-30  9:30 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Jan Kratochvil


> > But this code compares a NORMAL_FRAME from before the step with a
> > BTRACE_FRAME from after the wait.  They will always be unequal hence
> > we will never recognize that we just reverse-stepped into a function.
> >
> > When I reset the frame cache, GDB re-computes the stored frame and
> now
> > compares two BTRACE_FRAMEs, which works OK.
> [...]
> > See above.  Alternatively, I might add a special case to frame comparison,
> > but this would be quite ugly, as well.  Do you have a better idea?
> 
> +record_btrace_start_replaying (struct thread_info *tp)
> [...]
> +  /* Make sure we're not using any stale registers.  */
> +  registers_changed_ptid (tp->ptid);
> +
> +  /* We just started replaying.  The frame id cached for stepping is based
> +     on unwinding, not on branch tracing.  Recompute it.  */
> +  frame = get_current_frame_nocheck ();
> +  insn = btrace_insn_get (replay);
> +  sal = find_pc_line (insn->pc, 0);
> +  set_step_info (frame, sal);
> 
> The problem comes from the new commands like "record goto" which
> change
> inferior content without resuming+stopping it.
> 
> Former "record full" could only change history position by "step/reverse-
> step"
> (or similar commands) which did resume+stop the inferior.
> 
> To make the "record goto" command friendly to the GDB infrastructure
> expectations I think you should put a temporary breakpoint to the target
> instruction, resume the inferior and simulate stop at the temporary
> breakpoint.
> 
> I think all the registers_changed_ptid() calls could be removed afterwards.

That would cause quite some overhead if we're moving by a big number
of instructions.

First, we'd single-step instead of just setting the PC.  Second, I'd need to
examine all instruction addresses on the way in order to compute the ignore
count of that temporary breakpoint.

Record full needs to single-step in order to restore the memory and
register contents.  But for record btrace, this would be completely
artificial.  I don't think we should do it this way.  Could we maybe polish
my solution so it is more in line with the rest of GDB?


Regards,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [patch v4 18/24] record-btrace: extend unwinder
  2013-09-27 13:55       ` Jan Kratochvil
@ 2013-09-30  9:45         ` Metzger, Markus T
  2013-09-30 10:26           ` Jan Kratochvil
  0 siblings, 1 reply; 88+ messages in thread
From: Metzger, Markus T @ 2013-09-30  9:45 UTC (permalink / raw)
  To: Jan Kratochvil; +Cc: gdb-patches

> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Friday, September 27, 2013 3:55 PM


> > This has meanwhile been resolved.  This had been a side-effect of throwing
> > an error in to_fetch_registers.  When I just return, function arguments are
> > correctly displayed as unavailable and the "can't compute CFA for this
> > frame" message is gone.
> 
> With v6 patchset it is only sometimes gone, I still get it.
> Tested with (results are the same):
> 	gcc (GCC) 4.8.2 20130927 (prerelease)
> 	gcc-4.8.1-10.fc21.x86_64
> 
> int f(int i) {
>   return i;
> }
> int main(void) {
>   f(1);
>   return 0;
> }
> 
> gcc -o test3 test3.c -Wall -g
> ./gdb ./test3 -ex start -ex 'record btrace' -ex step -ex step -ex reverse-step -
> ex frame
> #0  f (i=<error reading variable: can't compute CFA for this frame>) at
> test3.c:2
> 2	  return i;
> (gdb) _
> 
> It gets fixed by the attached patch.

Thanks.  I'll incorporate it into the extend unwinder patch.

Given that we always throw an error for BTRACE frames, there's no
need to get the stop reason first or to skip inline frames; they won't
be mixed with BTRACE frames.
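
For context, a rough sketch of the register-fetch behavior described in the
quote above (a sketch only, with a made-up helper name; the actual patch may
be structured differently):

/* While replaying, supply only the PC of the current branch-trace
   instruction and silently skip all other registers, so they show up as
   unavailable instead of causing errors.  REPLAY is the thread's replay
   iterator.  */

static void
replay_fetch_registers (struct regcache *regcache, int regno,
                        const struct btrace_insn_iterator *replay)
{
  struct gdbarch *gdbarch;
  const struct btrace_insn *insn;
  int pcreg;

  gdbarch = get_regcache_arch (regcache);
  pcreg = gdbarch_pc_regnum (gdbarch);

  /* Only the PC is known during replay.  */
  if (regno != -1 && regno != pcreg)
    return;

  insn = btrace_insn_get (replay);
  regcache_raw_supply (regcache, pcreg, &insn->pc);
}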

Regards,
Markus.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 23/24] record-btrace: add (reverse-)stepping support
  2013-09-30  9:30         ` Metzger, Markus T
@ 2013-09-30 10:25           ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-30 10:25 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches

On Mon, 30 Sep 2013 11:30:14 +0200, Metzger, Markus T wrote:
> > -----Original Message-----
> > From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> > owner@sourceware.org] On Behalf Of Jan Kratochvil
> 
> 
> > > But this code compares a NORMAL_FRAME from before the step with a
> > > BTRACE_FRAME from after the wait.  They will always be unequal hence
> > > we will never recognize that we just reverse-stepped into a function.
> > >
> > > When I reset the frame cache, GDB re-computes the stored frame and now
> > > compares two BTRACE_FRAMEs, which works OK.
> > [...]
> > > See above.  Alternatively, I might add a special case to frame comparison,
> > > but this would be quite ugly, as well.  Do you have a better idea?
> > 
> > +record_btrace_start_replaying (struct thread_info *tp)
> > [...]
> > +  /* Make sure we're not using any stale registers.  */
> > +  registers_changed_ptid (tp->ptid);
> > +
> > +  /* We just started replaying.  The frame id cached for stepping is based
> > +     on unwinding, not on branch tracing.  Recompute it.  */
> > +  frame = get_current_frame_nocheck ();
> > +  insn = btrace_insn_get (replay);
> > +  sal = find_pc_line (insn->pc, 0);
> > +  set_step_info (frame, sal);
> > 
> > The problem comes from the new commands like "record goto" which change
> > inferior content without resuming+stopping it.
> > 
> > Former "record full" could only change history position by "step/reverse-
> > step"
> > (or similar commands) which did resume+stop the inferior.
> > 
> > To make the "record goto" command friendly to the GDB infrastructure
> > expectations I think you should put a temporary breakpoint to the target
> > instruction, resume the inferior and simulate stop at the temporary
> > breakpoint.
> > 
> > I think all the registers_changed_ptid() calls could be removed afterwards.
> 
> That would cause quite some overhead if we're moving by a big number
> of instructions.
> 
> First, we'd single-step instead of just setting the PC.  Second, I'd need to
> examine all instruction addresses on the way in order to compute the ignore
> count of that temporary breakpoint.

I did not mean single-stepping.  Just do a single to_resume + to_wait, where
to_wait returns the new PC.  Unfortunately, one has to create a temporary
breakpoint, otherwise GDB will print an unexpected SIGTRAP; but many commands
(like "next" over a function call) create temporary breakpoints anyway.

This way, all the actions in the current proceed (), handle_inferior_event (),
etc. get executed.
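
A hypothetical sketch of that approach (not a proposed patch; it uses the
existing momentary-breakpoint machinery, assumes synchronous execution where
proceed () only returns after the stop, and ignores the repeated-PC /
ignore-count problem discussed above):

/* Implement "record goto" by planting a momentary breakpoint at DEST_PC
   and going through the normal resume/stop machinery, so that proceed ()
   and handle_inferior_event () see an ordinary breakpoint stop.  */

static void
record_goto_via_breakpoint (struct gdbarch *gdbarch, CORE_ADDR dest_pc)
{
  struct symtab_and_line sal;
  struct breakpoint *bp;
  struct cleanup *old_chain;

  sal = find_pc_line (dest_pc, 0);
  sal.pc = dest_pc;

  bp = set_momentary_breakpoint (gdbarch, sal, null_frame_id, bp_breakpoint);
  old_chain = make_cleanup_delete_breakpoint (bp);

  /* The record target's to_resume/to_wait report the stop at DEST_PC.  */
  proceed ((CORE_ADDR) -1, GDB_SIGNAL_0, 0);

  do_cleanups (old_chain);
}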


Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [patch v4 18/24] record-btrace: extend unwinder
  2013-09-30  9:45         ` Metzger, Markus T
@ 2013-09-30 10:26           ` Jan Kratochvil
  0 siblings, 0 replies; 88+ messages in thread
From: Jan Kratochvil @ 2013-09-30 10:26 UTC (permalink / raw)
  To: Metzger, Markus T; +Cc: gdb-patches

On Mon, 30 Sep 2013 11:44:41 +0200, Metzger, Markus T wrote:
> Thanks.  I'll incorporate it into the extend unwinder patch.
> 
> Given that we always throw an error for BTRACE frames, there's no
> need to get the stop reason first or to skip inline frames; they won't
> be mixed with BTRACE frames.

OK, fine with that.


Thanks,
Jan

^ permalink raw reply	[flat|nested] 88+ messages in thread

end of thread, other threads:[~2013-09-30 10:26 UTC | newest]

Thread overview: 88+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-07-03  9:15 [patch v4 00/24] record-btrace: reverse Markus Metzger
2013-07-03  9:14 ` [patch v4 05/24] record-btrace: start counting at one Markus Metzger
2013-08-18 19:11   ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 24/24] record-btrace: skip tail calls in back trace Markus Metzger
2013-08-18 19:10   ` Jan Kratochvil
2013-09-17 14:28     ` Metzger, Markus T
2013-09-18  8:28       ` Metzger, Markus T
2013-09-18  9:52         ` Metzger, Markus T
2013-07-03  9:14 ` [patch v4 20/24] btrace, gdbserver: read branch trace incrementally Markus Metzger
2013-08-18 19:09   ` Jan Kratochvil
2013-09-16 12:48     ` Metzger, Markus T
2013-09-22 14:42       ` Jan Kratochvil
2013-09-23  7:09         ` Metzger, Markus T
2013-09-25 19:05           ` Jan Kratochvil
2013-09-26  6:27             ` Metzger, Markus T
2013-07-03  9:14 ` [patch v4 10/24] target: add ops parameter to to_prepare_to_store method Markus Metzger
2013-08-18 19:07   ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 14/24] record-btrace: provide xfer_partial target method Markus Metzger
2013-08-18 19:08   ` Jan Kratochvil
2013-09-16  9:30     ` Metzger, Markus T
2013-09-22 14:18       ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 07/24] record-btrace: optionally indent function call history Markus Metzger
2013-08-18 19:06   ` Jan Kratochvil
2013-09-10 13:06     ` Metzger, Markus T
2013-09-10 13:08       ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 08/24] record-btrace: make ranges include begin and end Markus Metzger
2013-08-18 19:12   ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 16/24] record-btrace: provide target_find_new_threads method Markus Metzger
2013-08-18 19:15   ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 11/24] record-btrace: supply register target methods Markus Metzger
2013-08-18 19:07   ` Jan Kratochvil
2013-09-16  9:19     ` Metzger, Markus T
2013-09-22 13:55       ` Jan Kratochvil
2013-09-23  6:55         ` Metzger, Markus T
2013-07-03  9:14 ` [patch v4 02/24] record: upcase record_print_flag enumeration constants Markus Metzger
2013-08-18 19:11   ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 19/24] btrace, linux: fix memory leak when reading branch trace Markus Metzger
2013-08-18 19:09   ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 03/24] btrace: change branch trace data structure Markus Metzger
2013-08-18 19:05   ` Jan Kratochvil
2013-09-10  9:11     ` Metzger, Markus T
2013-09-12 20:09       ` Jan Kratochvil
2013-09-16  9:01         ` Metzger, Markus T
2013-09-21 19:44           ` Jan Kratochvil
2013-09-23  6:54             ` Metzger, Markus T
2013-09-23  7:15               ` Jan Kratochvil
2013-09-23  7:27                 ` Metzger, Markus T
2013-09-22 16:57         ` Jan Kratochvil
2013-09-22 17:16           ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 09/24] btrace: add replay position to btrace thread info Markus Metzger
2013-08-18 19:07   ` Jan Kratochvil
2013-09-10 13:24     ` Metzger, Markus T
2013-09-12 20:19       ` Jan Kratochvil
2013-07-03  9:14 ` [patch v4 22/24] infrun: reverse stepping from unknown functions Markus Metzger
2013-08-18 19:09   ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 13/24] record-btrace, frame: supply target-specific unwinder Markus Metzger
2013-08-18 19:07   ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 18/24] record-btrace: extend unwinder Markus Metzger
2013-08-18 19:08   ` Jan Kratochvil
2013-09-16 11:21     ` Metzger, Markus T
2013-09-27 13:55       ` Jan Kratochvil
2013-09-30  9:45         ` Metzger, Markus T
2013-09-30 10:26           ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 23/24] record-btrace: add (reverse-)stepping support Markus Metzger
2013-08-18 19:09   ` Jan Kratochvil
2013-09-17  9:43     ` Metzger, Markus T
2013-09-29 17:24       ` Jan Kratochvil
2013-09-30  9:30         ` Metzger, Markus T
2013-09-30 10:25           ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 12/24] frame, backtrace: allow targets to supply a frame unwinder Markus Metzger
2013-08-18 19:14   ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 01/24] gdbarch: add instruction predicate methods Markus Metzger
2013-07-03  9:49   ` Mark Kettenis
2013-07-03 11:10     ` Metzger, Markus T
2013-08-18 19:04   ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 17/24] record-btrace: add record goto target methods Markus Metzger
2013-08-18 19:08   ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 06/24] btrace: increase buffer size Markus Metzger
2013-08-18 19:06   ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 15/24] record-btrace: add to_wait and to_resume target methods Markus Metzger
2013-08-18 19:08   ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 04/24] record-btrace: fix insn range in function call history Markus Metzger
2013-08-18 19:06   ` Jan Kratochvil
2013-07-03  9:15 ` [patch v4 21/24] record-btrace: show trace from enable location Markus Metzger
2013-08-18 19:10   ` instruction_history.exp unset variable [Re: [patch v4 21/24] record-btrace: show trace from enable location] Jan Kratochvil
2013-09-16 14:11     ` Metzger, Markus T
2013-08-18 19:16   ` [patch v4 21/24] record-btrace: show trace from enable location Jan Kratochvil
2013-08-18 19:04 ` [patch v4 00/24] record-btrace: reverse Jan Kratochvil

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).