Subject: [PATCH 1/4] Create new target "kdump" which uses libkdumpfile (https://github.com/ptesarik/libkdumpfile) to access the contents of compressed kernel dumps
From: Ales Novak @ 2016-01-31 21:45 UTC
  To: gdb-patches; +Cc: Ales Novak

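For orientation, a rough illustrative sketch only (not code from this patch): the libkdumpfile entry points the new gdb/kdump.c below leans on are kdump_num_cpus(), kdump_read_reg() and kdump_free().  Creating the kdump_ctx from the dump file happens in a part of kdump.c not shown in this excerpt, so the sketch assumes an already-opened context.

  /* Illustrative sketch, not part of the patch: walk every CPU of an
     already-opened dump and print its raw register values, using the same
     libkdumpfile calls kdump.c uses (kdump_num_cpus, kdump_read_reg,
     kdump_free).  */
  #include <stdio.h>
  #include <kdumpfile.h>

  static void
  print_all_registers (kdump_ctx *ctx)
  {
    unsigned int cpu, r;
    kdump_reg_t reg;

    for (cpu = 0; cpu < kdump_num_cpus (ctx); cpu++)
      /* kdump_read_reg returns nonzero past the last register, which is
         also how init_types() in kdump.c stops its loop.  */
      for (r = 0; kdump_read_reg (ctx, cpu, r, &reg) == 0; r++)
        printf ("CPU%u REG%02u = %llx\n", cpu, r, (unsigned long long) reg);

    kdump_free (ctx);  /* done with the dump, as core_close() does */
  }
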
---
 LICENSE                              |  340 +++++++
 gdb/Makefile.in                      |   13 +-
 gdb/c-typeprint.c                    |    6 +
 gdb/c-valprint.c                     |    6 +
 gdb/cli/cli-cmds.c                   |   34 +-
 gdb/configure.ac                     |   47 +-
 gdb/data-directory/Makefile.in       |    1 +
 gdb/disasm.c                         |  110 ++
 gdb/disasm.h                         |    1 +
 gdb/kdump.c                          | 1867 ++++++++++++++++++++++++++++++++++
 gdb/mi/mi-out.c                      |    3 +-
 gdb/python/lib/gdb/kdump/__init__.py |   20 +
 gdb/python/py-block.c                |  102 ++
 gdb/typeprint.c                      |    9 +-
 gdb/typeprint.h                      |    2 +
 15 files changed, 2541 insertions(+), 20 deletions(-)
 create mode 100644 LICENSE
 create mode 100644 gdb/kdump.c
 create mode 100644 gdb/python/lib/gdb/kdump/__init__.py

diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..d6a9326
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,340 @@
+GNU GENERAL PUBLIC LICENSE
+                       Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc., <http://fsf.org/>
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users.  This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it.  (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.)  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+  To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have.  You must make sure that they, too, receive or can get the
+source code.  And you must show them these terms so they know their
+rights.
+
+  We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+  Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software.  If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+  Finally, any free program is threatened constantly by software
+patents.  We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary.  To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                    GNU GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License.  The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language.  (Hereinafter, translation is included without limitation in
+the term "modification".)  Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+  1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+  2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) You must cause the modified files to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    b) You must cause any work that you distribute or publish, that in
+    whole or in part contains or is derived from the Program or any
+    part thereof, to be licensed as a whole at no charge to all third
+    parties under the terms of this License.
+
+    c) If the modified program normally reads commands interactively
+    when run, you must cause it, when started running for such
+    interactive use in the most ordinary way, to print or display an
+    announcement including an appropriate copyright notice and a
+    notice that there is no warranty (or else, saying that you provide
+    a warranty) and that users may redistribute the program under
+    these conditions, and telling the user how to view a copy of this
+    License.  (Exception: if the Program itself is interactive but
+    does not normally print such an announcement, your work based on
+    the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+    a) Accompany it with the complete corresponding machine-readable
+    source code, which must be distributed under the terms of Sections
+    1 and 2 above on a medium customarily used for software interchange; or,
+
+    b) Accompany it with a written offer, valid for at least three
+    years, to give any third party, for a charge no more than your
+    cost of physically performing source distribution, a complete
+    machine-readable copy of the corresponding source code, to be
+    distributed under the terms of Sections 1 and 2 above on a medium
+    customarily used for software interchange; or,
+
+    c) Accompany it with the information you received as to the offer
+    to distribute corresponding source code.  (This alternative is
+    allowed only for noncommercial distribution and only if you
+    received the program in object code or executable form with such
+    an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it.  For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable.  However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License.  Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+  5. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Program or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+  6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+  7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation.  If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+  10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission.  For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this.  Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+                            NO WARRANTY
+
+  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    {description}
+    Copyright (C) {year}  {fullname}
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License along
+    with this program; if not, write to the Free Software Foundation, Inc.,
+    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  {signature of Ty Coon}, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
+
diff --git a/gdb/Makefile.in b/gdb/Makefile.in
index dfaa8a3..3c7518a 100644
--- a/gdb/Makefile.in
+++ b/gdb/Makefile.in
@@ -136,6 +136,7 @@ CC_LD=$(COMPILER)
 INCLUDE_DIR =  $(srcdir)/../include
 INCLUDE_CFLAGS = -I$(INCLUDE_DIR)
 
+LIBKDUMPFILE_LDFLAGS = @LIBKDUMPFILE_LDFLAGS@
 # Where is the "-liberty" library?  Typically in ../libiberty.
 LIBIBERTY = ../libiberty/libiberty.a
 
@@ -533,7 +534,7 @@ CONFIG_UNINSTALL = @CONFIG_UNINSTALL@
 HAVE_NATIVE_GCORE_TARGET = @HAVE_NATIVE_GCORE_TARGET@
 
 # -I. for config files.
 # -I$(srcdir) for gdb internal headers.
 # -I$(srcdir)/config for more generic config files.
 
 # It is also possible that you will need to add -I/usr/include/sys if
@@ -558,7 +559,7 @@ CFLAGS = @CFLAGS@
 # are sometimes a little generic, we think that the risk of collision
 # with other header files is high.  If that happens, we try to mitigate
 # a bit the consequences by putting the Python includes last in the list.
-INTERNAL_CPPFLAGS = @CPPFLAGS@ @GUILE_CPPFLAGS@ @PYTHON_CPPFLAGS@
+INTERNAL_CPPFLAGS = @CPPFLAGS@ @GUILE_CPPFLAGS@ @PYTHON_CPPFLAGS@ @LIBKDUMPFILE_CPPFLAGS@
 
 # Need to pass this to testsuite for "make check".  Probably should be
 # consistent with top-level Makefile.in and gdb/testsuite/Makefile.in
@@ -570,7 +571,7 @@ INTERNAL_CFLAGS_BASE = \
 	$(CFLAGS) $(GLOBAL_CFLAGS) $(PROFILE_CFLAGS) \
 	$(GDB_CFLAGS) $(OPCODES_CFLAGS) $(READLINE_CFLAGS) $(ZLIBINC) \
 	$(BFD_CFLAGS) $(INCLUDE_CFLAGS) $(LIBDECNUMBER_CFLAGS) \
 	$(INTL_CFLAGS) $(INCGNU) $(ENABLE_CFLAGS) $(INTERNAL_CPPFLAGS)
 INTERNAL_WARN_CFLAGS = $(INTERNAL_CFLAGS_BASE) $(GDB_WARN_CFLAGS)
 INTERNAL_CFLAGS = $(INTERNAL_WARN_CFLAGS) $(GDB_WERROR_CFLAGS)
 
@@ -594,7 +595,7 @@ CLIBS = $(SIM) $(READLINE) $(OPCODES) $(BFD) $(ZLIB) $(INTL) $(LIBIBERTY) $(LIBD
 	$(XM_CLIBS) $(NAT_CLIBS) $(GDBTKLIBS) \
 	@LIBS@ @GUILE_LIBS@ @PYTHON_LIBS@ \
 	$(LIBEXPAT) $(LIBLZMA) $(LIBBABELTRACE) $(LIBIPT) \
-	$(LIBIBERTY) $(WIN32LIBS) $(LIBGNU)
+	$(LIBIBERTY) $(WIN32LIBS) $(LIBGNU) $(LIBKDUMPFILE_LDFLAGS)
 CDEPS = $(XM_CDEPS) $(NAT_CDEPS) $(SIM) $(BFD) $(READLINE_DEPS) \
 	$(OPCODES) $(INTL_DEPS) $(LIBIBERTY) $(CONFIG_DEPS) $(LIBGNU)
 
@@ -833,7 +834,7 @@ SFILES = ada-exp.y ada-lang.c ada-typeprint.c ada-valprint.c ada-tasks.c \
 	build-id.c buildsym.c \
 	c-exp.y c-lang.c c-typeprint.c c-valprint.c c-varobj.c \
 	charset.c common/cleanups.c cli-out.c coffread.c coff-pe-read.c \
-	complaints.c completer.c continuations.c corefile.c corelow.c \
+	complaints.c completer.c continuations.c corefile.c corelow.c @KDUMP_SOURCES@ \
 	cp-abi.c cp-support.c cp-namespace.c cp-valprint.c \
 	d-exp.y d-lang.c d-valprint.c \
 	cp-name-parser.y \
@@ -1021,7 +1022,7 @@ COMMON_OBS = $(DEPFILES) $(CONFIG_OBS) $(YYOBJ) \
 	blockframe.o breakpoint.o break-catch-sig.o break-catch-throw.o \
 	break-catch-syscall.o \
 	findvar.o regcache.o cleanups.o \
-	charset.o continuations.o corelow.o disasm.o dummy-frame.o dfp.o \
+	charset.o continuations.o corelow.o @KDUMP_OBJS@ disasm.o dummy-frame.o dfp.o \
 	source.o value.o eval.o valops.o valarith.o valprint.o printcmd.o \
 	block.o symtab.o psymtab.o symfile.o symfile-debug.o symmisc.o \
 	linespec.o dictionary.o \
diff --git a/gdb/c-typeprint.c b/gdb/c-typeprint.c
index 421b720..283612f 100644
--- a/gdb/c-typeprint.c
+++ b/gdb/c-typeprint.c
@@ -1105,6 +1105,12 @@ c_type_print_base (struct type *type, struct ui_file *stream,
 		      }
 		  }
 
+		if (flags->print_offsets == 1) {
+		  if (TYPE_CODE(type) == TYPE_CODE_STRUCT)
+		    fprintf_filtered (stream, "[0x%03x] ", TYPE_FIELD(type, i).loc.bitpos >> 3);
+		  else
+		    fprintf_filtered (stream, "         ");
+		}
 		print_spaces_filtered (level + 4, stream);
 		if (field_is_static (&TYPE_FIELD (type, i)))
 		  fprintf_filtered (stream, "static ");
diff --git a/gdb/c-valprint.c b/gdb/c-valprint.c
index 8d8b744..89e86df 100644
--- a/gdb/c-valprint.c
+++ b/gdb/c-valprint.c
@@ -270,6 +270,12 @@ c_val_print (struct type *type, const gdb_byte *valaddr,
 	{
 	  int want_space;
 
+	  if (TYPE_CODE(elttype) == TYPE_CODE_STRUCT) {
+	    if (TYPE_TAG_NAME(elttype))
+	      fprintf_filtered (stream, _("(struct %s*) "), TYPE_TAG_NAME(elttype));
+	    else if (TYPE_NAME(elttype))
+	      fprintf_filtered (stream, _("(struct %s*) "), TYPE_NAME(elttype));
+	  }
 	  addr = unpack_pointer (type, valaddr + embedded_offset);
 	print_unpacked_pointer:
 
diff --git a/gdb/cli/cli-cmds.c b/gdb/cli/cli-cmds.c
index 2ec2dd3..734b3f7 100644
--- a/gdb/cli/cli-cmds.c
+++ b/gdb/cli/cli-cmds.c
@@ -55,6 +55,7 @@
 #endif
 
 #include <fcntl.h>
+#include "mi/mi-out.h"
 
 /* Prototypes for local command functions */
 
@@ -1092,17 +1093,28 @@ print_disassembly (struct gdbarch *gdbarch, const char *name,
   if (!tui_is_window_visible (DISASSEM_WIN))
 #endif
     {
-      printf_filtered ("Dump of assembler code ");
-      if (name != NULL)
-        printf_filtered ("for function %s:\n", name);
-      else
-        printf_filtered ("from %s to %s:\n",
-			 paddress (gdbarch, low), paddress (gdbarch, high));
+      if (flags & DISASSEMBLY_HACK) {
+	struct ui_out *out;
+	out = mi_out_new (1);
+
+	ui_out_begin (out, ui_out_type_tuple, NULL);
+	gdb_disassembly (gdbarch, out, 0, flags & ~DISASSEMBLY_OMIT_FNAME, -1, low, high);
+	ui_out_end (out, ui_out_type_tuple);
+	mi_out_put (out, gdb_stdout);
+	ui_out_destroy (out);
+      } else {
+	printf_filtered ("Dump of assembler code ");
+	if (name != NULL)
+	  printf_filtered ("for function %s:\n", name);
+	else
+	  printf_filtered ("from %s to %s:\n",
+			   paddress (gdbarch, low), paddress (gdbarch, high));
 
-      /* Dump the specified range.  */
-      gdb_disassembly (gdbarch, current_uiout, 0, flags, -1, low, high);
+	/* Dump the specified range.  */
+	gdb_disassembly (gdbarch, current_uiout, 0, flags, -1, low, high);
 
-      printf_filtered ("End of assembler dump.\n");
+	printf_filtered ("End of assembler dump.\n");
+      }
       gdb_flush (gdb_stdout);
     }
 #if defined(TUI)
@@ -1186,6 +1198,9 @@ disassemble_command (char *arg, int from_tty)
 	    case 'r':
 	      flags |= DISASSEMBLY_RAW_INSN;
 	      break;
+	    case 'h':
+	      flags |= DISASSEMBLY_HACK;
+	      break;
 	    default:
 	      error (_("Invalid disassembly modifier."));
 	    }
@@ -1856,6 +1871,7 @@ Disassemble a specified section of memory.\n\
 Default is the function surrounding the pc of the selected frame.\n\
 With a /m modifier, source lines are included (if available).\n\
 With a /r modifier, raw instructions in hex are included.\n\
+With a /h modifier, extra MI2-formatted information is dumped.\n\
 With a single argument, the function surrounding that address is dumped.\n\
 Two arguments (separated by a comma) are taken as a range of memory to dump,\n\
   in the form of \"start,end\", or \"start,+length\".\n\
diff --git a/gdb/configure.ac b/gdb/configure.ac
index a40860a..41f5e04 100644
--- a/gdb/configure.ac
+++ b/gdb/configure.ac
@@ -1032,7 +1032,7 @@ AM_CONDITIONAL(HAVE_PYTHON, test "${have_libpython}" != no)
 # -------------------- #
 # Check for libguile.  #
 # -------------------- #
 
 dnl Utility to simplify finding libguile.
 dnl $1 = pkg-config-program
 dnl $2 = space-separate list of guile versions to try
@@ -2465,6 +2465,51 @@ else
   fi
 fi
 
+AC_ARG_WITH(libkdumpfile,
+  AC_HELP_STRING([--with-libkdumpfile], [include support for Linux kernel dumps (auto/yes/no)]),
+  [], [with_libkdumpfile=auto])
+AC_MSG_CHECKING([whether to use libkdumpfile])
+AC_MSG_RESULT([$with_libkdumpfile])
+
+if test "x$with_libkdumpfile" = "xno"; then
+  AC_MSG_WARN([Linux kernel dump support disabled; GDB will be unable to read kdump files.])
+else
+  saved_CFLAGS=$CFLAGS
+  CFLAGS="$CFLAGS -Werror"
+  AC_LIB_HAVE_LINKFLAGS([kdumpfile], [],
+			[#include <kdumpfile.h>
+			static kdump_ctx *dump_ctx;])
+  CFLAGS=$saved_CFLAGS
+
+  new_CPPFLAGS=`${pkg_config} --cflags libkdumpfile`
+  if test $? != 0; then
+    AC_MSG_WARN([failure running pkg-config --cflags libkdumpfile])
+  fi
+  new_LIBS=`${pkg_config} --libs libkdumpfile`
+  if test $? != 0; then
+    AC_MSG_WARN([failure running pkg-config --libs libkdumpfile])
+  fi
+   
+  if test "$HAVE_LIBKDUMPFILE" != yes; then
+     if test "$with_libkdumpfile" = yes; then
+       AC_MSG_ERROR([libkdumpfile is missing or unusable])
+     else
+       AC_MSG_WARN([libkdumpfile is missing or unusable; GDB will be unable to read kdump files.])
+     fi
+  else
+    LIBKDUMPFILE_CPPFLAGS=${new_CPPFLAGS}
+    LIBKDUMPFILE_LDFLAGS=${new_LIBS}
+    KDUMP_SOURCES=kdump.c
+    KDUMP_OBJS=kdump.o
+    AC_SUBST(KDUMP_OBJS)
+    AC_SUBST(KDUMP_SOURCES)
+    AC_SUBST(LIBKDUMPFILE_CPPFLAGS)
+    AC_SUBST(LIBKDUMPFILE_LDFLAGS)
+  fi
+fi
+
+
+
 # If nativefile (NAT_FILE) is not set in config/*/*.m[ht] files, we link
 # to an empty version.
 
diff --git a/gdb/data-directory/Makefile.in b/gdb/data-directory/Makefile.in
index abca534..fe2733d 100644
--- a/gdb/data-directory/Makefile.in
+++ b/gdb/data-directory/Makefile.in
@@ -73,6 +73,7 @@ PYTHON_FILE_LIST = \
 	gdb/command/pretty_printers.py \
 	gdb/command/prompt.py \
 	gdb/command/explore.py \
+	gdb/kdump/__init__.py \
 	gdb/function/__init__.py \
 	gdb/function/caller_is.py \
 	gdb/function/strfns.py \
diff --git a/gdb/disasm.c b/gdb/disasm.c
index 483df01..8304769 100644
--- a/gdb/disasm.c
+++ b/gdb/disasm.c
@@ -24,6 +24,10 @@
 #include "disasm.h"
 #include "gdbcore.h"
 #include "dis-asm.h"
+#include "ui-out.h"
+#include "c-lang.h"
+#include "block.h"
+#include "typeprint.h"
 
 /* Disassemble functions.
    FIXME: We should get rid of all the duplicate code in gdb that does
@@ -92,6 +96,42 @@ compare_lines (const void *mle1p, const void *mle2p)
   return val;
 }
 
+
+struct lst {
+  const void *data;
+  struct lst *next;
+};
+
+static void lst_new(struct lst **l)
+{
+  *l = NULL;
+}
+static struct lst *lst_add(struct lst *l, const void *data)
+{
+  struct lst *n = malloc(sizeof (*n));
+  n->next = l;
+  n->data = data;
+  return n;
+}
+
+static int lst_has(struct lst *l, const void *data)
+{
+  for (; l != NULL && l->data != data; l = l->next);
+  return l?1:0;
+}
+
+static void lst_free(struct lst **l)
+{
+  struct lst *n;
+
+  while(*l) {
+    n = (*l)->next;
+    free (* l);
+    *l = NULL;
+    l = &n;
+  }
+}
+
 static int
 dump_insns (struct gdbarch *gdbarch, struct ui_out *uiout,
 	    struct disassemble_info * di,
@@ -106,6 +146,12 @@ dump_insns (struct gdbarch *gdbarch, struct ui_out *uiout,
   int offset;
   int line;
   struct cleanup *ui_out_chain;
+  struct lst *blocks;
+  struct symtab_and_line sal;
+  const struct block *block;
+
+  if (flags & DISASSEMBLY_HACK) 
+    lst_new (& blocks);
 
   for (pc = low; pc < high;)
     {
@@ -122,6 +168,19 @@ dump_insns (struct gdbarch *gdbarch, struct ui_out *uiout,
 	}
       ui_out_chain = make_cleanup_ui_out_tuple_begin_end (uiout, NULL);
 
+      if (flags & DISASSEMBLY_HACK) {
+	block = block_for_pc (pc);
+	if (block && ! lst_has(blocks, block)) {
+	  blocks = lst_add (blocks, block);
+	}
+
+	sal = find_pc_line (pc, 1);
+	if (sal.symtab && sal.symtab->filename) {
+	  ui_out_field_string (uiout, "file-name", sal.symtab->filename);
+	  ui_out_field_int (uiout, "file-line", sal.line);
+	}
+      }
+
       if ((flags & DISASSEMBLY_OMIT_PC) == 0)
 	ui_out_text (uiout, pc_prefix (pc));
       ui_out_field_core_addr (uiout, "address", gdbarch, pc);
@@ -182,6 +241,57 @@ dump_insns (struct gdbarch *gdbarch, struct ui_out *uiout,
       do_cleanups (ui_out_chain);
       ui_out_text (uiout, "\n");
     }
+
+  if (flags & DISASSEMBLY_HACK) {
+    struct lst *l;
+    ui_out_end (uiout, ui_out_type_list);
+    ui_out_begin (uiout, ui_out_type_list, "blocks");
+    for (l = blocks; l; l = l->next) {
+      struct dict_iterator iter;
+      void *v;
+      block = (struct block*)l->data;
+      ui_out_begin (uiout, ui_out_type_tuple, NULL);
+      ui_out_field_core_addr (uiout, "begin", gdbarch, block->startaddr);
+      ui_out_field_core_addr (uiout, "end", gdbarch, block->endaddr);
+
+      if (block_inlined_p (block)) {
+	ui_out_field_string (uiout, "inlined", SYMBOL_NATURAL_NAME(BLOCK_FUNCTION(block)));
+      }
+
+      ui_out_begin (uiout, ui_out_type_list, "variables");
+      v = dict_iterator_first(block->dict, &iter);
+      while (v != NULL) {
+	struct symbol *ss = (struct symbol*)v;
+	struct type *typ = SYMBOL_TYPE(ss);
+        struct ui_file *mem = mem_fileopen ();
+        struct cleanup *cleanups = make_cleanup_ui_file_delete (mem);
+
+	c_print_type(typ, SYMBOL_NATURAL_NAME(ss), mem, 1, 4, &type_print_raw_options);
+
+	ui_out_begin (uiout, ui_out_type_tuple, NULL);
+
+	ui_out_field_stream (uiout, "variable", mem);
+
+	ui_file_rewind (mem);
+
+	if (SYMBOL_COMPUTED_OPS(ss) != NULL) {
+	  SYMBOL_COMPUTED_OPS(ss)->describe_location(ss, block->startaddr, mem);
+	  ui_out_field_stream (uiout, "dwarf", mem);
+	}
+        ui_out_end (uiout, ui_out_type_tuple);
+
+        do_cleanups (cleanups);
+	v = dict_iterator_next(&iter);
+      } 
+      ui_out_end (uiout, ui_out_type_list);
+
+      ui_out_end (uiout, ui_out_type_tuple);
+
+    }
+
+    lst_free (& blocks);
+  }
+
   return num_displayed;
 }
 
diff --git a/gdb/disasm.h b/gdb/disasm.h
index a91211e..ef43994 100644
--- a/gdb/disasm.h
+++ b/gdb/disasm.h
@@ -26,6 +26,7 @@
 #define DISASSEMBLY_OMIT_FNAME	(0x1 << 2)
 #define DISASSEMBLY_FILENAME	(0x1 << 3)
 #define DISASSEMBLY_OMIT_PC	(0x1 << 4)
+#define DISASSEMBLY_HACK   	(0x1 << 5)
 
 struct gdbarch;
 struct ui_out;
diff --git a/gdb/kdump.c b/gdb/kdump.c
new file mode 100644
index 0000000..b7b0ef5
--- /dev/null
+++ b/gdb/kdump.c
@@ -0,0 +1,1867 @@
+/* Core dump and executable file functions below target vector, for GDB.
+
+ Copyright (C) 1986-2015 Free Software Foundation, Inc.
+
+ This file is part of GDB.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+#include "arch-utils.h"
+#include <signal.h>
+#include <fcntl.h>
+#ifdef HAVE_SYS_FILE_H
+#include <sys/file.h>		/* needed for F_OK and friends */
+#endif
+#include "frame.h"		/* required by inferior.h */
+
+#include "symtab.h"
+#include "regcache.h"
+#include "memattr.h"
+#include "language.h"
+#include "command.h"
+#include "gdbcmd.h"
+#include "inferior.h"
+#include "infrun.h"
+#include "command.h"
+#include "bfd.h"
+#include "target.h"
+#include "gdbcore.h"
+#include "gdbthread.h"
+#include "regcache.h"
+#include "regset.h"
+#include "symfile.h"
+#include "exec.h"
+#include "readline/readline.h"
+#include "exceptions.h"
+#include "solib.h"
+#include "filenames.h"
+#include "progspace.h"
+#include "objfiles.h"
+#include "gdb_bfd.h"
+#include "completer.h"
+#include "filestuff.h"
+#include "s390-linux-tdep.h"
+#include "kdumpfile.h"
+#include "minsyms.h"
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <unistd.h>
+
+
+#include <dirent.h>
+#include <endian.h>
+
+#ifndef O_LARGEFILE
+#define O_LARGEFILE 0
+#endif
+typedef unsigned long long offset;
+#define NULL_offset 0LL
+#define F_BIG_ENDIAN     1
+#define F_LITTLE_ENDIAN  2
+#define F_UNKN_ENDIAN    4
+
+unsigned long long kt_int_value (void *buff);
+unsigned long long kt_ptr_value (void *buff);
+
+int kt_hlist_head_for_each_node (char *addr, int(*func)(void *,offset), void *data);
+
+#define kt_list_head_for_each(addr,head,lhb, _nxt) for((_nxt = kt_ptr_value(lhb)), kdump_type_alloc((struct kdump_type*)&kt_list_head, _nxt, 0, lhb);\
+	(_nxt = kt_ptr_value(lhb)) != head; \
+	kdump_type_alloc((struct kdump_type*)&kt_list_head, _nxt, 0, lhb))
+
+static struct target_ops core_ops;
+
+static kdump_ctx *dump_ctx = NULL;
+
+struct gdbarch *kdump_gdbarch = NULL;
+
+struct target_ops *kdump_target = NULL;
+
+static void init_core_ops (void);
+
+void _initialize_kdump (void);
+
+static void core_close (struct target_ops *self);
+
+#define KDUMP_TYPE const char *_name; int _size; int _offset; struct type *_origtype
+#define GET_GDB_TYPE(typ) types. typ ._origtype
+#define GET_TYPE_SIZE(typ) (TYPE_LENGTH(GET_GDB_TYPE(typ)))
+#define MEMBER_OFFSET(type,member) types. type. member
+#define KDUMP_TYPE_ALLOC(type) kdump_type_alloc(GET_GDB_TYPE(type))
+#define KDUMP_TYPE_GET(type,off,where) kdump_type_get(GET_GDB_TYPE(type), off, 0, where)
+#define KDUMP_TYPE_FREE(where) free(where)
+#define SYMBOL(var,name) do { var = lookup_symbol(name, NULL, VAR_DOMAIN, NULL); if (! var) { fprintf(stderr, "Cannot lookup_symbol(" name ")\n"); goto error; } } while(0)
+#define OFFSET(x) (types.offsets. x)
+
+#define MAXSYMNAME 256
+
+#define GET_REGISTER_OFFSET(reg) (MEMBER_OFFSET(user_regs_struct,reg)/GET_TYPE_SIZE(_voidp))
+#define GET_REGISTER_OFFSET_pt(reg) (MEMBER_OFFSET(pt_regs,reg)/GET_TYPE_SIZE(_voidp))
+
+#define list_for_each(pos, head) \
+	for (pos = kt_ptr_value(head); pos != (head); KDUMP_TYPE_GET(_voidp,pos,&pos))
+
+#define list_head_for_each(head,lhb, _nxt) for((_nxt = kt_ptr_value(lhb)), KDUMP_TYPE_GET(list_head, _nxt, lhb);\
+	(_nxt = kt_ptr_value(lhb)) != head; \
+	KDUMP_TYPE_GET(list_head, _nxt, lhb))
+
+enum x86_64_regs {
+	reg_RAX = 0,
+	reg_RCX = 2,
+	reg_RDX = 1,
+	reg_RBX = 3,
+	reg_RBP = 6,
+	reg_RSI = 4,
+	reg_RDI = 5,
+	reg_RSP = 7,
+	reg_R8 = 8,
+	reg_R9 = 9,
+	reg_R10 = 10,
+	reg_R11 = 11,
+	reg_R12 = 12,
+	reg_R13 = 13,
+	reg_R14 = 14,
+	reg_R15 = 15,
+	reg_RIP = 16,
+	reg_RFLAGS = 49,
+	reg_ES = 50,
+	reg_CS = 51,
+	reg_SS = 52,
+	reg_DS = 53,
+	reg_FS = 54,
+	reg_GS = 55,
+};
+
+int kt_hlist_head_for_each_node (char *addr, int(*func)(void *,offset), void *data);
+
+typedef enum {
+	ARCH_NONE,
+	ARCH_X86_64,
+	ARCH_S390X,
+	ARCH_PPC64LE,
+} t_arch;
+
+struct cpuinfo {
+	struct {
+		offset curr;
+	} rq;
+};
+
+struct {
+	struct {
+		KDUMP_TYPE;
+		offset prev;
+		offset next;
+	} list_head;
+
+	struct {
+		KDUMP_TYPE;
+		offset first;
+	} hlist_head;
+
+	struct {
+		KDUMP_TYPE;
+		offset next;
+	} hlist_node;
+
+	struct {
+		KDUMP_TYPE;
+	} _int;
+
+	struct {
+		KDUMP_TYPE;
+	} _voidp;
+
+	struct {
+		KDUMP_TYPE;
+
+		offset nr;
+		offset pid_chain;
+	} upid;
+
+	struct {
+		KDUMP_TYPE;
+
+		offset pid;
+		offset pids;
+		offset stack;
+		offset tasks;
+		offset thread;
+		offset thread_group;
+		offset state;
+		offset comm;
+	} task_struct;
+
+	struct {
+		KDUMP_TYPE;
+		offset sp;
+	} thread_struct;
+
+	struct {
+		KDUMP_TYPE;
+		offset curr;
+		offset idle;
+	} rq;
+
+	struct {
+		KDUMP_TYPE;
+		offset r15;
+		offset r14;
+		offset r13;
+		offset r12;
+		offset bp;
+		offset bx;
+		offset r11;
+		offset r10;
+		offset r9;
+		offset r8;
+		offset ax;
+		offset cx;
+		offset dx;
+		offset si;
+		offset di;
+		offset orig_ax;
+		offset ip;
+		offset cs;
+		offset flags;
+		offset sp;
+		offset ss;
+		offset fs_base;
+		offset gs_base;
+		offset ds;
+		offset es;
+		offset fs;
+		offset gs;
+	} user_regs_struct;
+
+	struct pt_regs {
+		KDUMP_TYPE;
+		offset r15;
+		offset r14;
+		offset r13;
+		offset r12;
+		offset bp;
+		offset bx;
+		offset r11;
+		offset r10;
+		offset r9;
+		offset r8;
+		offset ax;
+		offset cx;
+		offset dx;
+		offset si;
+		offset di;
+		offset orig_ax;
+		offset ip;
+		offset cs;
+		offset flags;
+		offset sp;
+		offset ss;		
+	} pt_regs;
+
+	struct ppc_pt_regs {
+		KDUMP_TYPE;
+                offset gpr00;
+                offset gpr01;
+                offset gpr02;
+                offset gpr03;
+                offset gpr04;
+                offset gpr05;
+                offset gpr06;
+                offset gpr07;
+                offset gpr08;
+                offset gpr09;
+                offset gpr10;
+                offset gpr11;
+                offset gpr12;
+                offset gpr13;
+                offset gpr14;
+                offset gpr15;
+                offset gpr16;
+                offset gpr17;
+                offset gpr18;
+                offset gpr19;
+                offset gpr20;
+                offset gpr21;
+                offset gpr22;
+                offset gpr23;
+                offset gpr24;
+                offset gpr25;
+                offset gpr26;
+                offset gpr27;
+                offset gpr28;
+                offset gpr29;
+                offset gpr30;
+                offset gpr31;
+                offset nip;
+                offset msr;
+                offset or3;
+                offset ctr;
+                offset lr;
+                offset xer;
+                offset ccr;
+                offset mq;
+                offset dar;
+                offset dsisr;
+                offset rx1;
+                offset rx2;
+                offset rx3;
+                offset rx4;
+                offset rx5;
+                offset rx6;
+                offset rx7;
+	} ppc_pt_regs;
+
+	struct {
+		KDUMP_TYPE;
+		offset list;
+		offset version;
+		offset srcversion;
+		offset name;
+		offset module_core;
+	} module;
+
+
+	int flags;
+	t_arch arch;
+
+	struct {
+		offset percpu_start;
+		offset percpu_end;
+		offset *percpu_offsets;
+	} offsets;
+
+	struct cpuinfo *cpu;
+	int ncpus;
+} types;
+
+struct task_info {
+	offset task_struct;
+	offset sp;
+	offset ip;
+	int pid;
+	int cpu;
+	offset rq;
+};
+
+enum {
+	T_STRUCT = 1,
+	T_BASE,
+	T_REF
+};
+
+static void free_task_info(struct private_thread_info *addr)
+{
+	struct task_info *ti = (struct task_info*)addr;
+	free(ti);
+}
+
+static struct type *my_lookup_struct (const char *name, const struct block *block)
+{
+  struct symbol *sym;
+
+  sym = lookup_symbol (name, block, STRUCT_DOMAIN, 0);
+
+  if (sym == NULL)
+    {
+	return NULL;
+    }
+  if (TYPE_CODE (SYMBOL_TYPE (sym)) != TYPE_CODE_STRUCT)
+    {
+      warning(_("This context has class, union or enum %s, not a struct."), name);
+      return NULL;
+    }
+  return (SYMBOL_TYPE (sym));
+}
+
+
+unsigned long long kt_int_value (void *buff)
+{
+	unsigned long long val;
+
+	if (GET_TYPE_SIZE(_int) == 4) {
+		val = *(int32_t*)buff;
+		if (types.flags & F_BIG_ENDIAN) val = be32toh(val);
+	} else {
+		val = *(int64_t*)buff;
+		if (types.flags & F_BIG_ENDIAN) val = be64toh(val);
+	}
+
+	return val;
+}
+
+unsigned long long kt_ptr_value (void *buff)
+{
+	unsigned long long val;
+	
+	if (GET_TYPE_SIZE(_voidp) == 4) {
+		val = (unsigned long long) *(uint32_t*)buff;
+		if (types.flags & F_BIG_ENDIAN) val = be32toh(val);
+	} else {
+		val = (unsigned long long) *(uint64_t*)buff;
+		if (types.flags & F_BIG_ENDIAN) val = be64toh(val);
+	}
+	return val;
+}
+static offset get_symbol_address(const char *sname);
+static offset get_symbol_address(const char *sname)
+{
+	struct symbol *ss;
+	const struct language_defn *lang;
+	struct bound_minimal_symbol bms;
+	struct value *val;
+	offset off;
+
+	bms = lookup_minimal_symbol(sname, NULL, NULL);
+	if (bms.minsym != NULL) {
+		return ((offset)BMSYMBOL_VALUE_ADDRESS(bms));
+	}
+	ss = lookup_global_symbol(sname, NULL, ALL_DOMAIN);
+	if (! ss) {
+		ss = lookup_static_symbol(sname, ALL_DOMAIN);
+		if (! ss) return NULL_offset ;
+	}
+	lang  = language_def (SYMBOL_LANGUAGE (ss));
+	val = lang->la_read_var_value (ss, NULL);
+	if (! val) {
+		return NULL_offset;
+	}
+
+	off = (offset) value_address(val);
+	return off;
+}
+static offset get_symbol_value(const char *sname);
+static offset get_symbol_value(const char *sname)
+{
+	struct symbol *ss;
+	const struct language_defn *lang;
+	struct value *val;
+	offset off;
+	ss = lookup_global_symbol(sname, NULL, VAR_DOMAIN);
+	if (! ss) {
+		ss = lookup_static_symbol(sname, VAR_DOMAIN);
+		if (! ss) return NULL_offset ;
+	}
+	lang  = language_def (SYMBOL_LANGUAGE (ss));
+	val = lang->la_read_var_value (ss, NULL);
+	if (! val) {
+		return NULL_offset;
+	}
+
+	if (TYPE_CODE(value_type(val)) == TYPE_CODE_ENUM) {
+		return (offset) value_as_long(val);
+	} else {
+		off = (offset) value_address(val);
+		return off;
+	}
+}
+
+/**
+ * Search GDB for a type with the specified name and kind.
+ *
+ * @param _type on successful return, contains a pointer to the type
+ * @param size on successful return, contains the type size
+ * @param origname the name of the type
+ * @param origtype T_STRUCT, T_REF or T_BASE
+ *
+ * @return 0 on success
+ */
+static int kdump_type_init (struct type **_type, int *size, const char *origname, int origtype)
+{
+	struct type *t;
+
+	if (origtype == T_STRUCT)  {
+		t = my_lookup_struct(origname, NULL);
+	} else if (origtype == T_REF) {
+		struct type *dt;
+ 		dt = lookup_typename(current_language, kdump_gdbarch, origname, NULL, 0);
+		if (dt == NULL) {
+			warning(_("Cannot lookup dereferenced type %s\n"), origname);
+			t = NULL;
+		} else {
+			t = lookup_reference_type(dt);
+		}
+	} else
+		t = lookup_typename(current_language, kdump_gdbarch, origname, NULL, 0);
+
+	if (t == NULL) {
+		warning(_("Cannot lookup(%s)\n"), origname);
+		return 1;
+	}
+
+	*_type = t;
+	*size = TYPE_LENGTH(t);
+
+	return 0;
+}
+
+static int kdump_type_member_init (struct type *type, const char *name, offset *poffset)
+{
+	int i;
+	struct field *f;
+	f = TYPE_FIELDS(type);
+	for (i = 0; i < TYPE_NFIELDS(type); i ++) {
+		if (! strcmp(f->name, name)) {
+			*poffset = (f->loc.physaddr >> 3);
+			return 0;
+		}
+		f++;
+	}
+	return -1;
+}
+
+static void *kdump_type_alloc(struct type *type)
+{
+	void *buff;
+
+	buff = malloc(TYPE_LENGTH(type));
+	if (buff == NULL) {
+		warning(_("Cannot allocate memory of %d length\n"), (int)TYPE_LENGTH(type));
+		return NULL;
+	}
+	return buff;
+}
+
+static int kdump_type_get(struct type *type, offset addr, int pos, void *buff)
+{
+	if (target_read_raw_memory(addr + (TYPE_LENGTH(type)*pos), buff, TYPE_LENGTH(type))) {
+		warning(_("Cannot read target memory of %d length\n"), (int)TYPE_LENGTH(type));
+		return 1;
+	}
+	return 0;
+}
+
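Taken together, the helpers above form a small pipeline: kdump_type_init() resolves a gdb type once, kdump_type_member_init() records a member's byte offset, KDUMP_TYPE_ALLOC/KDUMP_TYPE_GET copy whole objects out of the dump, and kt_int_value()/kt_ptr_value() pick fields out of the buffer.  A condensed sketch of that flow (illustrative only; read_task_pid is not part of the patch, but it mirrors what add_task() further down does with task_struct.pid):

  /* Sketch only: read task_struct.pid at dump offset OFF_TASK using the
     macros defined above.  Only valid after kdump_types_init() has
     populated the "types" table; add_task() below follows this pattern.  */
  static int
  read_task_pid (offset off_task, int *pid)
  {
    char *task = KDUMP_TYPE_ALLOC (task_struct);  /* buffer sized from the gdb type */

    if (task == NULL)
      return -1;
    if (KDUMP_TYPE_GET (task_struct, off_task, task))  /* target_read_raw_memory() inside */
      {
        KDUMP_TYPE_FREE (task);
        return -1;
      }
    *pid = (int) kt_int_value (task + MEMBER_OFFSET (task_struct, pid));
    KDUMP_TYPE_FREE (task);
    return 0;
  }
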
+int kdump_types_init(int flags);
+int kdump_types_init(int flags)
+{
+	int ret = 1;
+
+	types.flags = flags;
+
+	#define INIT_STRUCT(name) if(kdump_type_init(&types. name ._origtype, &types. name ._size, #name, T_STRUCT)) { fprintf(stderr, "Cannot find struct type \'%s\'", #name); break; }
+	#define INIT_STRUCT_(name) if(kdump_type_init(&types. name ._origtype, &types. name ._size, #name, T_STRUCT)) {  }
+	#define INIT_STRUCT__(name,nname) if(kdump_type_init(&types. nname ._origtype, &types. nname ._size, #name, T_STRUCT)) {  }
+	#define INIT_BASE_TYPE(name) if(kdump_type_init(&types. name ._origtype, &types. name ._size, #name, T_BASE)) { fprintf(stderr, "Cannot find base type \'%s\'", #name); break; }
+	/** initialize base type and supply its name */
+	#define INIT_BASE_TYPE_(name,tname) if(kdump_type_init(&types. tname ._origtype, &types. tname ._size, #name, T_BASE)) { fprintf(stderr, "Cannot find base type \'%s\'", #name); break; }
+	#define INIT_REF_TYPE(name) if(kdump_type_init(&types. name ._origtype, &types. name ._size, #name, T_REF)) { fprintf(stderr, "Cannot find ref type \'%s\'", #name); break; }
+	#define INIT_REF_TYPE_(name,tname) if(kdump_type_init(&types. tname ._origtype, &types. tname ._size, #name, T_REF)) { fprintf(stderr, "Cannot find ref type \'%s\'", #name); break; }
+	#define INIT_STRUCT_MEMBER(sname,mname) if(kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mname)) { break; }
+
+	/** initialize a member, storing its offset under a different name */
+	#define INIT_STRUCT_MEMBER_(sname,mname,mmname) if(kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mmname)) { break; }
+
+	/** don't fail if the member is not present */
+	#define INIT_STRUCT_MEMBER__(sname,mname) kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mname)
+	do {
+		INIT_BASE_TYPE_(int,_int);
+		INIT_REF_TYPE_(void,_voidp);
+
+		INIT_STRUCT(list_head);
+		INIT_STRUCT_MEMBER(list_head,prev);
+		INIT_STRUCT_MEMBER(list_head,next);
+
+		INIT_STRUCT(hlist_head);
+		INIT_STRUCT_MEMBER(hlist_head,first);
+
+		INIT_STRUCT(hlist_node);
+		INIT_STRUCT_MEMBER(hlist_node,next);
+
+		INIT_STRUCT(upid);
+		INIT_STRUCT_MEMBER(upid,nr);
+		INIT_STRUCT_MEMBER(upid,pid_chain);
+
+		INIT_STRUCT(task_struct);
+		INIT_STRUCT_MEMBER(task_struct,pids);
+		INIT_STRUCT_MEMBER(task_struct,stack);
+		INIT_STRUCT_MEMBER(task_struct,tasks);
+		INIT_STRUCT_MEMBER(task_struct,thread);
+		INIT_STRUCT_MEMBER(task_struct,thread_group);
+		INIT_STRUCT_MEMBER(task_struct,pid);
+		INIT_STRUCT_MEMBER(task_struct,state);
+		INIT_STRUCT_MEMBER(task_struct,comm);
+
+		INIT_STRUCT(thread_struct);
+		MEMBER_OFFSET(thread_struct,sp) = 0;
+		INIT_STRUCT_MEMBER__(thread_struct,sp);
+		if (MEMBER_OFFSET(thread_struct,sp) == 0) {
+			INIT_STRUCT_MEMBER_(thread_struct,ksp,sp);
+		}
+
+		INIT_STRUCT(rq);
+
+		INIT_STRUCT_MEMBER(rq,curr);
+		INIT_STRUCT_MEMBER(rq,idle);
+
+		INIT_STRUCT_(user_regs_struct);
+		if (GET_GDB_TYPE(user_regs_struct)) {
+			INIT_STRUCT_MEMBER__(user_regs_struct, r15);
+			INIT_STRUCT_MEMBER__(user_regs_struct, r14);
+			INIT_STRUCT_MEMBER__(user_regs_struct, r13);
+			INIT_STRUCT_MEMBER__(user_regs_struct, r12);
+			INIT_STRUCT_MEMBER__(user_regs_struct, bp);
+			INIT_STRUCT_MEMBER__(user_regs_struct, bx);
+			INIT_STRUCT_MEMBER__(user_regs_struct, r11);
+			INIT_STRUCT_MEMBER__(user_regs_struct, r10);
+			INIT_STRUCT_MEMBER__(user_regs_struct, r9);
+			INIT_STRUCT_MEMBER__(user_regs_struct, r8);
+			INIT_STRUCT_MEMBER__(user_regs_struct, ax);
+			INIT_STRUCT_MEMBER__(user_regs_struct, cx);
+			INIT_STRUCT_MEMBER__(user_regs_struct, dx);
+			INIT_STRUCT_MEMBER__(user_regs_struct, si);
+			INIT_STRUCT_MEMBER__(user_regs_struct, di);
+			INIT_STRUCT_MEMBER__(user_regs_struct, orig_ax);
+			INIT_STRUCT_MEMBER__(user_regs_struct, ip);
+			INIT_STRUCT_MEMBER__(user_regs_struct, cs);
+			INIT_STRUCT_MEMBER__(user_regs_struct, flags);
+			INIT_STRUCT_MEMBER__(user_regs_struct, sp);
+			INIT_STRUCT_MEMBER__(user_regs_struct, ss);
+			INIT_STRUCT_MEMBER__(user_regs_struct, fs_base);
+			INIT_STRUCT_MEMBER__(user_regs_struct, gs_base);
+			INIT_STRUCT_MEMBER__(user_regs_struct, ds);
+			INIT_STRUCT_MEMBER__(user_regs_struct, es);
+			INIT_STRUCT_MEMBER__(user_regs_struct, fs);
+			INIT_STRUCT_MEMBER__(user_regs_struct, gs);
+		}
+
+		INIT_STRUCT(pt_regs);
+		INIT_STRUCT_MEMBER__(pt_regs, r15);
+		INIT_STRUCT_MEMBER__(pt_regs, r14);
+		INIT_STRUCT_MEMBER__(pt_regs, r13);
+		INIT_STRUCT_MEMBER__(pt_regs, r12);
+		INIT_STRUCT_MEMBER__(pt_regs, bp);
+		INIT_STRUCT_MEMBER__(pt_regs, bx);
+		INIT_STRUCT_MEMBER__(pt_regs, r11);
+		INIT_STRUCT_MEMBER__(pt_regs, r10);
+		INIT_STRUCT_MEMBER__(pt_regs, r9);
+		INIT_STRUCT_MEMBER__(pt_regs, r8);
+		INIT_STRUCT_MEMBER__(pt_regs, ax);
+		INIT_STRUCT_MEMBER__(pt_regs, cx);
+		INIT_STRUCT_MEMBER__(pt_regs, dx);
+		INIT_STRUCT_MEMBER__(pt_regs, si);
+		INIT_STRUCT_MEMBER__(pt_regs, di);
+		INIT_STRUCT_MEMBER__(pt_regs, orig_ax);
+		INIT_STRUCT_MEMBER__(pt_regs, ip);
+		INIT_STRUCT_MEMBER__(pt_regs, cs);
+		INIT_STRUCT_MEMBER__(pt_regs, flags);
+		INIT_STRUCT_MEMBER__(pt_regs, sp);
+		INIT_STRUCT_MEMBER__(pt_regs, ss);
+
+		INIT_STRUCT(module);
+		INIT_STRUCT_MEMBER(module, list);
+		INIT_STRUCT_MEMBER(module, version);
+		INIT_STRUCT_MEMBER(module, srcversion);
+		INIT_STRUCT_MEMBER(module, name);
+		INIT_STRUCT_MEMBER(module, module_core);
+
+		INIT_STRUCT__(pt_regs,ppc_pt_regs);
+		if (GET_GDB_TYPE(ppc_pt_regs)) {
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr00);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr01);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr02);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr03);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr04);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr05);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr06);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr07);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr08);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr09);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr10);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr11);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr12);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr13);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr14);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr15);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr16);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr17);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr18);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr19);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr20);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr21);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr22);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr23);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr24);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr25);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr26);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr27);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr28);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr29);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr30);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, gpr31);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, nip);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, msr);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, or3);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, ctr);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, lr);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, xer);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, ccr);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, mq);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, dar);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, dsisr);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx1);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx2);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx3);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx4);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx5);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx6);
+			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx7);
+		}
+		ret = 0;
+	} while(0);
+
+	if (ret) {
+		fprintf(stderr, "Cannot init types\n");
+	}
+
+	return ret;
+}
+
+int kt_hlist_head_for_each_node (char *addr, int(*func)(void *,offset), void *data)
+{
+	char *b = NULL;
+	offset l;
+
+	b = KDUMP_TYPE_ALLOC(hlist_node);
+	if (b == NULL) return -1;
+
+	l = kt_ptr_value((char*)addr + (size_t)MEMBER_OFFSET(hlist_head,first));
+	if (l == NULL_offset) {
+		KDUMP_TYPE_FREE(b);
+		return 0;
+	}
+	while(l != NULL_offset) {
+		
+		if (KDUMP_TYPE_GET (hlist_node, l, b)) {
+			fprintf(stderr, "Cannot read hlist_node\n");
+			KDUMP_TYPE_FREE(b);
+			return -1;
+		}
+		if (func(data, l)) break;
+		l = kt_ptr_value((char*)b + (size_t)MEMBER_OFFSET(hlist_node,next));
+	}
+
+	if (b) free(b);
+	return 0;
+}
+
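A usage note on the iterator above (again a sketch, not code from the patch): kt_hlist_head_for_each_node() expects a buffer that already holds the hlist_head contents, hands the callback the dump offset of each hlist_node on the chain, and stops early once the callback returns nonzero.  A caller that just counts chain entries would look roughly like this; count_node and count_hlist_nodes are illustrative names.

  /* Sketch only: count the nodes on one hlist chain via the iterator above.  */
  static int
  count_node (void *data, offset node)
  {
    (void) node;           /* dump offset of this hlist_node, unused here */
    ++*(int *) data;
    return 0;              /* 0 = keep iterating */
  }

  static int
  count_hlist_nodes (char *hlist_head_buf)
  {
    int nodes = 0;

    if (kt_hlist_head_for_each_node (hlist_head_buf, count_node, &nodes) < 0)
      return -1;           /* read error inside the walk */
    return nodes;
  }
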
+static void
+core_close (struct target_ops *self)
+{
+	if (dump_ctx != NULL) {
+		kdump_free(dump_ctx);
+		dump_ctx = NULL;
+	}
+
+	kdump_gdbarch = NULL;
+}
+
+static int init_types(int);
+static int init_types(int flags)
+{
+	int i, nc, r;
+	kdump_reg_t reg;
+
+	nc = kdump_num_cpus(dump_ctx);
+	types.ncpus = nc;
+
+	for (i = 0; i < nc; i++) {
+		for (r = 0; ; r++) {
+			if (kdump_read_reg(dump_ctx, i, r, &reg)) break;
+#ifdef _DEBUG
+			printf_filtered("CPU % 2d,REG%02d=%llx\n", i, r, (long long unsigned int)reg);
+#endif
+		}
+	}
+
+	return kdump_types_init(flags);
+}
+
+static offset get_percpu_offset(const char *varname, int ncpu);
+static offset get_percpu_offset(const char *varname, int ncpu)
+{
+	char buff[MAXSYMNAME];
+	offset off = NULL_offset;
+
+	struct bound_minimal_symbol bmsym;
+
+	snprintf(buff, sizeof(buff)-1, "%s", varname);
+
+	do {
+		off = get_symbol_value(buff);
+		if (off >= OFFSET(percpu_start) && off <= OFFSET(percpu_end)) {
+			off = off + OFFSET(percpu_offsets[ncpu]);
+			break;
+		}
+
+		bmsym = lookup_minimal_symbol(buff, NULL, NULL);
+		if (bmsym.minsym) {
+			struct obj_section *os;
+			os = MSYMBOL_OBJ_SECTION(bmsym.objfile, bmsym.minsym);
+
+			if (os && os->the_bfd_section && !strcmp(os->the_bfd_section->name, ".data..percpu")) {
+				off = off + OFFSET(percpu_offsets[ncpu]);
+				break;
+			}
+		}
+
+	} while(0);
+
+	return off;
+}
+
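+/**
+ * Read the per-CPU "runqueues" structures and remember each CPU's
+ * currently running task (rq->curr), so that tasks can later be
+ * matched to the CPU they were running on.
+ */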
+static void init_runqueues(void);
+static void init_runqueues(void)
+{
+	int i;
+	offset r, curr;
+	char *runq;
+
+	runq = KDUMP_TYPE_ALLOC(rq);
+
+	for(i = 0; i < types.ncpus; i++) {
+		r = get_percpu_offset("runqueues", i);
+		if (r == NULL_offset) {
+			error(_("Cannot get per-cpu offset of 'runqueues' for CPU %d\n"), i);
+			goto out;
+		}
+		if (KDUMP_TYPE_GET(rq, r, runq)) {
+			error(_("Cannot get runqueue\n"));
+			goto out;
+		}
+		curr = kt_ptr_value(runq + MEMBER_OFFSET(rq,curr));
+
+		types.cpu[i].rq.curr = curr;
+#ifdef _DEBUG
+		printf_filtered("cpu%02d->curr=%llx\n", i, curr);
+#endif
+	}
+out:
+	KDUMP_TYPE_FREE(runq);
+}
+
+/**
+ * Return the index of the CPU that runs the specified task, or -1.
+ *
+ */
+static int get_process_cpu(offset task);
+static int get_process_cpu(offset task)
+{
+	int i;
+
+	for(i = 0; i < types.ncpus; i++) {
+		if (types.cpu[i].rq.curr == task) return i;
+	}
+
+	return -1;
+}
+
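+/**
+ * Register the task_struct at off_task as a gdb thread and fill its
+ * register cache: tasks that were running on a CPU get their registers
+ * from the dump via kdump_read_reg(), sleeping tasks are reconstructed
+ * from the state saved on their kernel stack (task->thread.sp), as far
+ * as each architecture implements it.
+ */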
+static int add_task(offset off_task, int *pid_reserve, char *task);
+static int add_task(offset off_task, int *pid_reserve, char *task)
+{
+	struct symbol *s;
+	char *b = NULL, *init_task = NULL;
+	char _b[16];
+	offset rsp, rip, _rsp;
+	offset tasks;
+	offset stack;
+	offset o_init_task;
+	int state;
+	int i, cpu;
+	int hashsize;
+	struct task_info *task_info;
+
+	struct thread_info *info;
+	int pid;
+	ptid_t tt;
+	struct regcache *rc;
+	long long val;
+
+	b = _b;
+
+
+	state = kt_int_value(task + MEMBER_OFFSET(task_struct,state));
+	pid = kt_int_value(task + MEMBER_OFFSET(task_struct,pid));
+	stack = kt_ptr_value(task + MEMBER_OFFSET(task_struct,stack));
+	_rsp = rsp = kt_ptr_value(task + MEMBER_OFFSET(task_struct,thread) + MEMBER_OFFSET(thread_struct,sp));
+
+	if (pid == 0) {
+		pid = *pid_reserve;
+		*pid_reserve = pid + 1;
+	}
+	task_info = malloc(sizeof(struct task_info));
+	task_info->pid = pid;
+	task_info->cpu = -1;
+
+	if (types.arch == ARCH_S390X) {
+		rip = 0;
+		if (! KDUMP_TYPE_GET(_voidp, rsp+136, b))
+			rip = kt_ptr_value(b);
+		if (KDUMP_TYPE_GET(_voidp, rsp+144, b)) return -3;
+		rsp = kt_ptr_value(b);
+		task_info->sp = rsp;
+		task_info->ip = rip;
+	} else {
+		if (KDUMP_TYPE_GET(_voidp, rsp, b)) return -2;
+		rip = kt_ptr_value(b);
+	}
+#ifdef _DEBUG
+	fprintf(stdout, "TASK %llx,%llx,rsp=%llx,rip=%llx,pid=%d,state=%d,name=%s\n", off_task, stack, rsp, rip, pid, state, task + MEMBER_OFFSET(task_struct,comm));
+#endif
+	if (pid < 0) {
+		free_task_info((struct private_thread_info*)task_info);
+		return 0;
+	}
+
+	task_info->task_struct = off_task;
+
+	tt = ptid_build (1, pid, 0);
+	info = add_thread(tt);
+	info->priv = (struct private_thread_info*)task_info;
+	info->private_dtor = free_task_info;
+
+	inferior_ptid = tt;
+	info->name = strdup(task + MEMBER_OFFSET(task_struct,comm));
+
+	val = 0;
+
+	rc = get_thread_regcache (tt);
+
+	if (types.arch == ARCH_S390X) {
+
+		if (((cpu = get_process_cpu(off_task)) != -1)) {
+#ifdef _DEBUG
+			printf("task %p is running on %d\n", (void*)task_info->task_struct, cpu);
+#endif
+		}
+		/*
+		 * TODO: implement retrieval of register values from lowcore
+		 */
+		val = be64toh(rip);
+		regcache_raw_supply(rc, 1, &val);
+
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+136, b)) regcache_raw_supply(rc, S390_R14_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+128, b)) regcache_raw_supply(rc, S390_R13_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+120, b)) regcache_raw_supply(rc, S390_R12_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+112, b)) regcache_raw_supply(rc, S390_R11_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+104, b)) regcache_raw_supply(rc, S390_R10_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+96, b)) regcache_raw_supply(rc, S390_R9_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+88, b)) regcache_raw_supply(rc, S390_R8_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+80, b)) regcache_raw_supply(rc, S390_R7_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+72, b)) regcache_raw_supply(rc, S390_R6_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+64, b)) regcache_raw_supply(rc, S390_R5_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+56, b)) regcache_raw_supply(rc, S390_R4_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+48, b)) regcache_raw_supply(rc, S390_R3_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+40, b)) regcache_raw_supply(rc, S390_R2_REGNUM, b);
+		if (! KDUMP_TYPE_GET(_voidp, _rsp+32, b)) regcache_raw_supply(rc, S390_R1_REGNUM, b);
+		
+		val = be64toh(rsp);
+		regcache_raw_supply(rc, S390_R15_REGNUM, &val);
+	} else if (types.arch == ARCH_X86_64) {
+		/*
+		 * The task is not running; e.g. crash would show it stuck in
+		 * schedule(), yet schedule() is not on its stack.
+		 */
+		if ((cpu = get_process_cpu(off_task)) == -1) {
+			long long regs[64];
+
+			/*
+			 * Skip its stack frame.
+			 * FIXME: use the size obtained from debuginfo
+			 */
+			rsp += 0x148;
+			target_read_raw_memory(rsp - 0x8 * (1 + 6), (void*)regs, 0x8 * 6);
+
+			regcache_raw_supply(rc, 15, &regs[5]);
+			regcache_raw_supply(rc, 14, &regs[4]);
+			regcache_raw_supply(rc, 13, &regs[3]);
+			regcache_raw_supply(rc, 12, &regs[2]);
+			regcache_raw_supply(rc, 6, &regs[1]);
+			regcache_raw_supply(rc, 3, &regs[0]);
+
+			KDUMP_TYPE_GET(_voidp, rsp, b);
+			rip = kt_ptr_value(b);
+			rsp += 8;
+
+			regcache_raw_supply(rc, 7, &rsp);
+			regcache_raw_supply(rc, 16, &rip);
+
+			task_info->sp = rsp;
+			task_info->ip = rip;
+		} else {
+			kdump_reg_t reg;
+
+			task_info->cpu = cpu;
+#ifdef _DEBUG
+			printf("task %p is running on %d\n", (void*)task_info->task_struct, cpu);
+#endif
+
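+/*
+ * Read the register whose index is derived from its user_regs_struct
+ * member offset from the dump's register set for this CPU, and supply
+ * it to the regcache under regnum 'en'.
+ */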
+#define REG(en,mem) kdump_read_reg(dump_ctx, cpu, GET_REGISTER_OFFSET(mem), &reg); regcache_raw_supply(rc, en, &reg)
+		
+			REG(reg_RSP,sp);
+			task_info->sp = reg;
+			REG(reg_RIP,ip);
+			printf ("task %p cpu %02d rip = %p\n", (void*)task_info->task_struct, cpu, reg);
+			task_info->ip = reg;
+			REG(reg_RAX,ax);
+			REG(reg_RCX,cx);
+			REG(reg_RDX,dx);
+			REG(reg_RBX,bx);
+			REG(reg_RBP,bp);
+			REG(reg_RSI,si);
+			REG(reg_RDI,di);
+			REG(reg_R8,r8);
+			REG(reg_R9,r9);
+			REG(reg_R10,r10);
+			REG(reg_R11,r11);
+			REG(reg_R12,r12);
+			REG(reg_R13,r13);
+			REG(reg_R14,r14);
+			REG(reg_R15,r15);
+			REG(reg_RFLAGS,flags);
+			REG(reg_ES,es);
+			REG(reg_CS,cs);
+			REG(reg_SS,ss);
+			REG(reg_DS,ds);
+			REG(reg_FS,fs);
+			REG(reg_GS,gs);
+#undef REG
+		}
+	} else if (types.arch == ARCH_PPC64LE) {
+		if (((cpu = get_process_cpu(off_task)) == -1)) {
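+			/*
+			 * FIXME: reconstructing a sleeping task's registers
+			 * from its kernel stack is not implemented here yet;
+			 * fill the regcache with dummy placeholder values.
+			 */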
+			val = 789;
+			regcache_raw_supply(rc, 1, &val);
+			val = 456;
+			regcache_raw_supply(rc, 64, &val);
+			for (i = 0; i < 169; i ++) {
+				val = htobe64(i);
+				regcache_raw_supply(rc, i, &val);
+			}
+		} else {
+			kdump_reg_t reg;
+			task_info->cpu = cpu;
+			long long regs[64];
+			for (i = 0; i < 32; i ++) {
+				kdump_read_reg(dump_ctx, cpu, i, &reg);
+				val = htobe64(reg);
+				regcache_raw_supply(rc, i, &val);
+			}
+			for (i = 32; i < 49; i ++) {
+				kdump_read_reg(dump_ctx, cpu, i, &reg);
+				val = htobe64(reg);
+				regcache_raw_supply(rc, i+32, &val);
+			}
+			kdump_read_reg(dump_ctx, cpu, 32, &reg);
+			task_info->ip = reg;
+			kdump_read_reg(dump_ctx, cpu, 1, &reg);
+			task_info->sp = reg;
+		}
+	}
+
+	return 0;
+}
+
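+/**
+ * Gather the per-CPU offsets, initialize the runqueue information and
+ * walk the task list starting at init_task (including each task's
+ * thread_group) to register every task as a gdb thread.
+ */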
+static int init_values(void);
+static int init_values(void)
+{
+	struct symbol *s;
+	char *b = NULL, *init_task = NULL, *task = NULL;
+	offset off, off_task, rsp, rip, _rsp;
+	offset tasks;
+	offset stack;
+	offset o_init_task;
+	int state;
+	int i, cpu;
+	int hashsize;
+	struct inferior *in;
+	int cnt = 0;
+	int pid_reserve;
+	struct task_info *task_info;
+
+	s = NULL;
+	
+	b = KDUMP_TYPE_ALLOC(_voidp);
+	if (!b) goto error;
+
+	OFFSET(percpu_start) = get_symbol_value("__per_cpu_start");
+	OFFSET(percpu_end) = get_symbol_value("__per_cpu_end");
+	off = get_symbol_value("__per_cpu_offset");
+	types.cpu = malloc(sizeof(struct cpuinfo)*types.ncpus);
+	OFFSET(percpu_offsets) = malloc(sizeof(offset)*types.ncpus);
+	memset(OFFSET(percpu_offsets), 0, sizeof(offset)*types.ncpus);
+
+	for (i = 0; i < types.ncpus; i++) {
+		if (KDUMP_TYPE_GET(_voidp, off + GET_TYPE_SIZE(_voidp)*i, b)) goto error;
+		OFFSET(percpu_offsets[i]) = kt_ptr_value(b);
+#ifdef _DEBUG
+		printf ("pcpu[%d]=%llx\n", i, OFFSET(percpu_offsets[i]));
+#endif
+	}
+
+	init_runqueues();
+
+	o_init_task = get_symbol_value("init_task");
+	if (! o_init_task) {
+		warning(_("Cannot find init_task\n"));
+		return -1;
+	}
+	init_task = KDUMP_TYPE_ALLOC(task_struct);
+	if (!init_task)
+		goto error;
+	task = KDUMP_TYPE_ALLOC(task_struct);
+	if (!task) goto error;
+	if (KDUMP_TYPE_GET(task_struct, o_init_task, init_task))
+		goto error;
+	tasks = kt_ptr_value(init_task + MEMBER_OFFSET(task_struct,tasks));
+
+	i = 0;
+	off = 0;
+	pid_reserve = 50000;
+
+	print_thread_events = 0;
+	in = current_inferior();
+	inferior_appeared (in, 1);
+
+	list_head_for_each(tasks, init_task + MEMBER_OFFSET(task_struct,tasks), off) {
+		offset main_tasks;
+
+		off_task = off - MEMBER_OFFSET(task_struct,tasks);
+		if (KDUMP_TYPE_GET(task_struct, off_task, task)) continue;
+
+		main_tasks = off_task;
+
+		/* Walk the thread_group list of this task. */
+		do {
+			if (KDUMP_TYPE_GET(task_struct, off_task, task))
+				break;
+
+			if (! add_task(off_task, &pid_reserve, task))
+				printf_unfiltered(_("Loaded processes: %d\r"), ++cnt);
+
+			off_task = kt_ptr_value(task + MEMBER_OFFSET(task_struct, thread_group)) - MEMBER_OFFSET(task_struct, thread_group);
+		} while (off_task != main_tasks);
+	}
+
+	if (b) free(b);
+	if (init_task) free(init_task);
+
+	printf_unfiltered(_("Loaded processes: %d\n"), cnt);
+	return 0;
+error:
+	if (b) free(b);
+	if (init_task) free(init_task);
+
+	return 1;
+}
+
+struct t_kdump_arch {
+	char *kdident;
+	char *gdbident;
+	int flags;
+	t_arch arch;
+	int (*init_func)(const struct t_kdump_arch *, int *);
+} ;
+
+static int kdump_ppc64_init(const struct t_kdump_arch *a, int *flags)
+{
+	*flags = F_BIG_ENDIAN;
+	return 0;
+}
+
+static const struct t_kdump_arch archlist[] = {
+	{"x86_64", "i386:x86-64",      F_LITTLE_ENDIAN, ARCH_X86_64,  NULL},
+	{"s390x",  "s390:64-bit",      F_BIG_ENDIAN,    ARCH_S390X,   NULL},
+	{"ppc64",  "powerpc:common64", F_UNKN_ENDIAN,   ARCH_PPC64LE, kdump_ppc64_init},
+	{NULL}
+};
+
+
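+/**
+ * Identify the dump's architecture via libkdumpfile, map it to the
+ * corresponding gdbarch and initialize the type and task information.
+ */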
+static int kdump_do_init(void);
+static int kdump_do_init(void)
+{
+	const bfd_arch_info_type *ait;
+	struct gdbarch_info gai;
+	struct gdbarch *garch;
+	struct inferior *inf;
+	const char *archname;
+	const struct t_kdump_arch *a;
+	int flags, ret;
+	ptid_t tt;
+
+	archname = kdump_arch_name(dump_ctx);
+	if (! archname) {
+		error(_("The architecture could not be identified"));
+		return -1;
+	}
+	for (a = archlist; a->kdident && strcmp(a->kdident, archname); a++);
+
+	if (! a->kdident) {
+		error(_("Architecture %s is not yet supported by gdb-kdump\n"), archname);
+		return -2;
+	}
+
+	gdbarch_info_init(&gai);
+	ait = bfd_scan_arch (a->gdbident);
+	if (! ait) {
+		error(_("Architecture %s not supported in gdb\n"), a->gdbident);
+		return -3;
+	}
+	gai.bfd_arch_info = ait;
+	garch = gdbarch_find_by_info(gai);
+	kdump_gdbarch = garch;
+#ifdef _DEBUG
+	fprintf(stderr, "arch=%s,ait=%p,garch=%p\n", selected_architecture_name(), ait, garch);
+#endif
+	flags = a->flags;
+	if (a->init_func) {
+		if ((ret = a->init_func(a, &flags)) != 0) {
+			error(_("Architecture %s init_func()=%d"), a->kdident, ret);
+			return -5;
+		}
+	}
+	init_thread_list();
+	inf = current_inferior();
+
+	types.arch = a->arch;
+	
+	if (init_types(flags)) {
+		warning(_("kdump: Cannot init types!\n"));
+	}
+	if (init_values()) {
+		warning(_("kdump: Cannot init values!\n"));
+	}
+	set_executing(minus_one_ptid,0);
+	reinit_frame_cache();
+
+	return 0;
+}
+
+static kdump_status kdump_get_symbol_val_cb(kdump_ctx *ctx, const char *name, kdump_addr_t *val)
+{
+	*val = (kdump_addr_t) get_symbol_address(name);
+	return kdump_ok;
+}
+
+static void
+kdump_open (const char *arg, int from_tty)
+{
+	struct cleanup *old_chain;
+	char *temp;
+	char *filename;
+	int fd;
+
+	target_preopen (from_tty);
+	if (!arg) {
+		if (core_bfd)
+			error (_("No kdump file specified.  (Use `detach' "
+				"to stop debugging a core file.)"));
+		else
+			error (_("No kdump file specified."));
+	}
+
+	filename = tilde_expand (arg);
+	if (!IS_ABSOLUTE_PATH (filename))
+	{
+		temp = concat (current_directory, "/", filename, (char *) NULL);
+		xfree (filename);
+		filename = temp;
+	}
+	if ((fd = open(filename, O_RDONLY)) == -1) {
+		error(_("\"%s\" cannot be opened: %s\n"), filename, strerror(errno));
+		return;
+	}
+
+	dump_ctx = kdump_init();
+	if (!dump_ctx) {
+		error(_("kdump_init() failed, \"%s\" cannot be opened as kdump\n"), filename);
+		return;
+	}
+
+	kdump_cb_get_symbol_val(dump_ctx, kdump_get_symbol_val_cb);
+
+	if (kdump_set_fd(dump_ctx, fd) != kdump_ok) {
+		error(_("\"%s\" cannot be opened as kdump\n"), filename);
+		return;
+	}
+
+	if (kdump_vtop_init(dump_ctx) != kdump_ok) {
+		error(_("Cannot kdump_vtop_init(%s)\n"), kdump_err_str(dump_ctx));
+		return;
+	}
+
+	old_chain = make_cleanup (xfree, filename);
+
+	push_target (&core_ops);
+	
+	if (kdump_do_init()) {
+		error(_("Cannot initialize kdump"));
+	}
+
+	return;
+}
+
+static void
+core_detach (struct target_ops *ops, const char *args, int from_tty)
+{
+	if (args)
+		error (_("Too many arguments"));
+	unpush_target (ops);
+	reinit_frame_cache ();
+	if (from_tty)
+		printf_filtered (_("No core file now.\n"));
+}
+
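+/**
+ * Translate a kernel virtual address to a physical address using
+ * kdump_vtop(); if the translation fails, the address is returned
+ * unchanged.
+ */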
+static kdump_paddr_t transform_memory(kdump_paddr_t addr);
+static kdump_paddr_t transform_memory(kdump_paddr_t addr)
+{
+	kdump_paddr_t out;
+	if (kdump_ok == kdump_vtop(dump_ctx, addr, &out)) return out;
+	return addr;
+}
+
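+/*
+ * Memory reads are satisfied from the dump through libkdumpfile; all
+ * other transfer objects are delegated to the target beneath.
+ */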
+static enum target_xfer_status
+kdump_xfer_partial (struct target_ops *ops, enum target_object object,
+			 const char *annex, gdb_byte *readbuf,
+			 const gdb_byte *writebuf, ULONGEST offset,
+			 ULONGEST len, ULONGEST *xfered_len)
+{
+	ULONGEST i;
+	size_t r;
+	if (dump_ctx == NULL) {
+		error(_("dump_ctx == NULL\n"));
+	}
+	switch (object)
+	{
+		case TARGET_OBJECT_MEMORY:
+			offset = transform_memory((kdump_paddr_t)offset);
+			r = kdump_read(dump_ctx, (kdump_paddr_t)offset, (unsigned char*)readbuf, (size_t)len, KDUMP_PHYSADDR);
+			if (r != len) {
+				error(_("Cannot read %lu bytes from %lx (%lld)!"), (size_t)len, (long unsigned int)offset, (long long)r);
+			} else
+				*xfered_len = len;
+
+			return TARGET_XFER_OK;
+
+		default:
+			return ops->beneath->to_xfer_partial (ops->beneath, object,
+				annex, readbuf,
+				writebuf, offset, len,
+				xfered_len);
+	}
+}
+
+static int ignore (struct target_ops *ops, struct gdbarch *gdbarch, struct bp_target_info *bp_tgt)
+{
+	return 0;
+}
+
+static int
+core_thread_alive (struct target_ops *ops, ptid_t ptid)
+{
+	return 1;
+}
+
+static const struct target_desc *
+core_read_description (struct target_ops *target)
+{
+	if (kdump_gdbarch && gdbarch_core_read_description_p (kdump_gdbarch))
+	{
+		const struct target_desc *result;
+
+		result = gdbarch_core_read_description (kdump_gdbarch, target, core_bfd);
+		if (result != NULL) return result;
+	}
+
+	return target->beneath->to_read_description (target->beneath);
+}
+
+static int core_has_memory (struct target_ops *ops)
+{
+	return 1;
+}
+
+static int core_has_stack (struct target_ops *ops)
+{
+	return 1;
+}
+
+static int core_has_registers (struct target_ops *ops)
+{
+	return 1;
+}
+
+#ifdef _DEBUG
+void kdumptest_file_command (char *filename, int from_tty);
+void kdumptest_file_command (char *filename, int from_tty)
+{
+	const char *sname = "default_llseek";
+	struct symbol *ss;
+	const struct language_defn *lang;
+	struct value *val;
+	struct objfile *obj;
+	struct symtab_and_line sal;
+	CORE_ADDR addr;
+	ss = lookup_global_symbol(sname, NULL, FUNCTIONS_DOMAIN);
+	if (! ss) {
+		return;
+	}
+	lang  = language_def (SYMBOL_LANGUAGE (ss));
+	val = lang->la_read_var_value (ss, NULL);
+	if (! val) {
+		return;
+	}
+	addr = value_address(val);
+	printf("symbol = %llx\n", (unsigned long long)addr);
+
+	ss = lookup_static_symbol("modules", VAR_DOMAIN);
+	printf("MOD:symbol = %llx\n", (unsigned long long)ss);
+
+	sal = find_pc_line (addr, 0);
+	if (sal.line) {
+		if (sal.objfile) {
+			if (sal.objfile->original_name)
+				printf("original name = %s\n", sal.objfile->original_name);
+			printf("sal.objfile=%p\n", sal.objfile);
+		}
+		if (sal.section) {
+			if (sal.section->objfile) {
+				if (sal.section->objfile->original_name)
+					printf("original name = %s\n", sal.section->objfile->original_name);
+				printf("sal.section->objfile=%p\n", sal.section->objfile);
+			} else {
+				if (sal.section->the_bfd_section) {
+					printf("bfd_section\n");
+				} else
+					printf("nothing\n");
+			}
+		} else {
+			printf("no section\n");
+		}
+		if (sal.symtab && sal.symtab->filename) {
+			printf("symtab->filename=%s\n", sal.symtab->filename);
+		}
+
+		printf ("line=%d,pc=%llx,end=%llx\n", sal.line, (offset)sal.pc, (offset)sal.end);
+	}
+	{
+		struct symbol *ss;
+		const struct language_defn *lang;
+		struct value *val;
+		struct obj_section *os;
+		ss = lookup_global_symbol("runqueues", NULL, VAR_DOMAIN);
+		if (! ss) {
+			return;
+		}
+		printf("sect %d\n", SYMBOL_SECTION(ss));
+		//os = SYMBOL_OBJ_SECTION(ss);
+		if (ss->is_objfile_owned) {
+			printf("symtab=%p\n", ss->owner.symtab);
+			if (ss->owner.symtab->compunit_symtab) {
+				printf ("objfile %p\n", ss->owner.symtab->compunit_symtab->objfile);
+				os = SYMBOL_OBJ_SECTION(ss->owner.symtab->compunit_symtab->objfile, ss);
+				if (os->the_bfd_section) {
+					printf("bfd yes! \'%s\'\n", os->the_bfd_section->name);
+				}
+			}
+		}
+	}
+
+	printf ("percpu(runqueues,1)=%llx\n", get_percpu_offset("runqueues",1));
+
+	printf("sp_regnum=%d\n", gdbarch_sp_regnum(kdump_gdbarch));
+	if (gdbarch_unwind_sp_p (kdump_gdbarch))
+		printf("gdbarch_unwind_sp_p=TRUE\n");
+
+	fflush(stdout);
+	printf("PG_slab=%llx\n", get_symbol_value("PG_slab"));
+	return;
+}
+#endif
+
+void kdump_file_command (char *filename, int from_tty);
+void kdump_file_command (char *filename, int from_tty)
+{
+	dont_repeat ();
+
+	gdb_assert (kdump_target != NULL);
+
+	if (!filename)
+		(kdump_target->to_detach) (kdump_target, filename, from_tty);
+	else
+		(kdump_target->to_open) (filename, from_tty);
+}
+
+/**
+ * The following code searches the given path for the modules'
+ * debuginfo files.
+ */
+struct t_directory {
+	char *name;
+	struct t_directory *parent;
+	struct t_directory *next;
+	struct t_directory *_next;
+};
+
+struct t_node {
+	char *filename;
+	struct t_node *lt;
+	struct t_node *gt;
+	struct t_directory *parent;
+	struct t_node *_next;
+};
+
+static struct t_node *rootnode;
+static struct t_directory rootdir;
+static char rootname[NAME_MAX];
+static struct t_node *nodelist;
+
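+/*
+ * Build the full path of dir into path by walking its parent chain:
+ * each component is written reversed and the whole buffer is then
+ * reversed in place, yielding "root/sub/dir/".
+ */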
+static void putname(char *path, struct t_directory *dir)
+{
+	char *v, *c = path;
+	for (v = path; dir; dir = dir->parent) {
+		const char *e = dir->name + strlen(dir->name) - 1;
+		*v++ = '/';
+		while (e >= dir->name)
+			*v++ = *e--;
+	}
+	*v-- = '\0';
+	while (v > path) {
+		char z = *v;
+		*v = *path;
+		*path = z;
+		v --;
+		path ++;
+	}
+}
+
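+/*
+ * Insert node into the binary search tree rooted at *where, ordered by
+ * file name; a node with an equal name replaces the existing entry.
+ */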
+static void insertnode(struct t_node *node, struct t_node **where)
+{
+	while(* where) {
+		int ret = strcmp(node->filename, (*where)->filename);
+		if (ret < 0) where = & (*where)->lt;
+		else if (ret > 0) where = & (*where)->gt;
+		else break;
+	}
+	* where = node;
+	return;
+}
+
+/**
+ * Find the file of the given name (matching the file name only, not the
+ * full path).  If found, write its full path into output and return
+ * output; otherwise return NULL.
+ */
+static const char *find_module(const char *name, char *output)
+{
+	struct t_node *nod = rootnode;
+	int ret;
+
+	while(nod && (ret = strcmp(nod->filename, name)) != 0) {
+		if (ret > 0) nod = nod->lt;
+		else if (ret < 0) nod = nod->gt;
+	}
+	if (! nod) return NULL;
+	putname(output, nod->parent);
+	strcat(output, nod->filename);
+	return output;
+}
+
+static void free_module_list(void)
+{
+	struct t_directory *n, *p;
+	struct t_node *no, *po;
+	
+	for (n = rootdir._next; ; n = p) {
+		if (!n) break;
+		p = n->_next;
+		if (n->name) free (n->name);
+		free (n);
+	}
+
+	for (no = nodelist; ; no = po) {
+		if (!no) break;
+		po = no->_next;
+		if (no->filename) free (no->filename);
+		free (no);
+	}
+
+	rootdir._next = NULL;
+	nodelist = NULL;
+}
+
+/**
+ * Init the list of modules - walk through p_path and remember
+ * all the regular files whose names end with p_suffix.
+ */
+static void init_module_list(const char *p_path, const char *p_suffix)
+{
+	char path[NAME_MAX];
+	struct t_directory *di;
+	int suffixlen;
+	DIR *d;
+
+	suffixlen = strlen(p_suffix);
+	rootnode = NULL;
+	nodelist = NULL;
+	di = &rootdir;
+	snprintf(rootname, sizeof(rootname)-1, "%s", p_path);
+	rootdir.name = rootname;
+	rootdir.parent = NULL;
+	rootdir.next = NULL;
+
+	while(di) {
+		struct dirent en, *_en;
+		putname(path, di);
+		d = opendir(path);
+		if (!d) {
+			error(_("Cannot open dir %s!\n"), path);
+			return;
+		}
+		while (! readdir_r(d, &en, &_en) && (_en)) {
+			int type;
+
+			type = en.d_type;
+
+			if (en.d_name[0] == '.') continue;
+			if (type == DT_UNKNOWN) {
+				char npath[NAME_MAX];
+				struct stat st;
+				snprintf(npath, sizeof(npath)-1, "%s/%s", path, en.d_name);
+				if (stat(npath, &st) == 0) {
+					if (S_ISDIR(st.st_mode)) type = DT_DIR;
+					else if (S_ISREG(st.st_mode)) type = DT_REG;
+				}
+			}
+
+			if (type == DT_DIR) {
+				struct t_directory *ndi = malloc(sizeof(struct t_directory));
+				ndi->_next = rootdir._next;
+				rootdir._next = ndi;
+				ndi->next = di->next;
+				ndi->name = strdup(en.d_name);
+				ndi->parent = di;
+				di->next = ndi;
+			} else if (type == DT_REG) {
+				int l = strlen(en.d_name);
+
+				if (l > suffixlen && !strcmp(en.d_name+l-suffixlen, p_suffix)) {
+					struct t_node *nod = malloc(sizeof(struct t_node));
+					nod->_next = nodelist;
+					nodelist = nod;
+					nod->filename = strdup(en.d_name);
+					nod->parent = di;
+					nod->lt = nod->gt = NULL;
+					insertnode(nod, &rootnode);
+				}
+			}
+		}
+		closedir(d);
+		di = di->next;
+	}
+}
+
+static void kdumpmodules_command (char *filename, int from_tty);
+static void kdumpmodules_command (char *filename, int from_tty)
+{
+	offset sym_modules, modules, mod, off_mod, addr;
+	char *module = NULL;
+	char *v = NULL;
+	char modulename[56+9+1];
+	char modulepath[NAME_MAX];
+	int flags = OBJF_USERLOADED | OBJF_SHARED;
+	struct section_addr_info *section_addrs;
+	struct objfile *objf;
+
+	if (dump_ctx == NULL) {
+		error(_("dump_ctx == NULL\n"));
+	}
+	if (! filename || ! strlen(filename)) {
+		error(_("Specify the name of a directory to load the modules' debuginfo from"));
+	}
+	section_addrs = alloc_section_addr_info (1);
+	section_addrs->other[0].name = ".text";
+
+	/* search the path for modules */
+	init_module_list(filename, ".ko.debug");
+	module = KDUMP_TYPE_ALLOC(module);
+	v = KDUMP_TYPE_ALLOC(_voidp);
+	sym_modules = get_symbol_value("modules");
+	if(KDUMP_TYPE_GET(_voidp, sym_modules, v)) goto error;
+	modules = kt_ptr_value(v);
+
+	/* Now walk through the module list of the dumped kernel and, for
+	 * each module, try to find its debuginfo file.  */
+	list_head_for_each(modules, v, mod) {
+		if (mod == sym_modules) break;
+		off_mod = mod - MEMBER_OFFSET(module,list);
+		if(KDUMP_TYPE_GET(module, off_mod, module)) goto error;
+		snprintf(modulename, sizeof(modulename)-1, "%s.ko.debug", module + MEMBER_OFFSET(module,name));
+		if (! find_module(modulename, modulepath)) {
+			warning(_("Cannot find debuginfo file for module \"%s\""), modulename);
+			continue;
+		}
+		addr = kt_ptr_value(module + MEMBER_OFFSET(module,module_core));
+#ifdef _DEBUG
+		fprintf(stderr, "Going to load module %s at %llx\n", modulepath, addr);
+#endif
+		section_addrs->other[0].addr = addr;
+		section_addrs->num_sections = 1;
+
+		/* Load the module's debuginfo at its module_core address.  */
+		objf = symbol_file_add (modulepath, from_tty ? SYMFILE_VERBOSE : 0,
+				  section_addrs, flags);
+		add_target_sections_of_objfile (objf);
+	}
+
+error:
+
+	if (v) free(v);
+	if (module) free(module);
+	free_module_list();
+}
+
+static void kdumpps_command(char *fn, int from_tty);
+static void kdumpps_command(char *fn, int from_tty)
+{
+	struct thread_info *tp;
+	struct task_info *task;
+	char cpu[6];
+
+	if (dump_ctx == NULL) {
+		error(_("dump_ctx == NULL\n"));
+	}
+	for (tp = thread_list; tp; tp = tp->next) {
+		task = (struct task_info*)tp->priv;
+		if (!task) continue;
+		if (task->cpu == -1) cpu[0] = '\0';
+		else snprintf(cpu, 5, "% 4d", task->cpu);
+		printf_filtered(_("% 7d %llx %llx %llx %-4s %s\n"), task->pid, task->task_struct, task->ip, task->sp, cpu, tp->name);
+	}
+}
+
+static char *
+kdump_pid_to_str (struct target_ops *ops, ptid_t ptid)
+{
+	static char buf[32];
+	xsnprintf (buf, sizeof buf, "pid %ld", ptid_get_lwp (ptid));
+	return buf;
+}
+
+struct cmd_list_element *kdumplist = NULL;
+static void init_core_ops (void)
+{
+	struct cmd_list_element *c;
+	core_ops.to_shortname = "kdump";
+	core_ops.to_longname = "Compressed kdump file";
+	core_ops.to_doc =
+		"Use a vmcore file as a target.  Specify the filename of the vmcore file.";
+	core_ops.to_open = kdump_open;
+	core_ops.to_close = core_close;
+	core_ops.to_detach = core_detach;
+	core_ops.to_xfer_partial = kdump_xfer_partial;
+	core_ops.to_insert_breakpoint = ignore;
+	core_ops.to_remove_breakpoint = ignore;
+	core_ops.to_thread_alive = core_thread_alive;
+	core_ops.to_read_description = core_read_description;
+	core_ops.to_stratum = process_stratum;
+	core_ops.to_has_memory = core_has_memory;
+	core_ops.to_has_stack = core_has_stack;
+	core_ops.to_has_registers = core_has_registers;
+	core_ops.to_magic = OPS_MAGIC;
+	core_ops.to_pid_to_str = kdump_pid_to_str;
+
+	if (kdump_target)
+		internal_error (__FILE__, __LINE__,
+			_("init_kdump_ops: core target already exists (\"%s\")."),
+			kdump_target->to_longname);
+
+	kdump_target = &core_ops;
+
+	c = add_prefix_cmd ("kdump", no_class, kdumpmodules_command,
+		_("Commands for working with a kernel dump target"),
+		&kdumplist, "kdump ", 0, &cmdlist);
+
+	c = add_cmd ("modules", class_files, kdumpmodules_command,
+		_("Load modules' debuginfo from a directory"), &kdumplist);
+	set_cmd_completer (c, filename_completer);
+
+	c = add_cmd ("ps", class_files, kdumpps_command,
+		_("Print ps info"), &kdumplist);
+
+	set_cmd_completer (c, filename_completer);
+
+#ifdef _DEBUG
+	c = add_cmd ("kdumptest", class_files, kdumptest_file_command, _("\
+Test command"), &kdumplist);
+#endif
+}
+
+void
+_initialize_kdump (void)
+{
+	init_core_ops ();
+
+	add_target_with_completer (&core_ops, filename_completer);
+}
diff --git a/gdb/mi/mi-out.c b/gdb/mi/mi-out.c
index 20f59c3..e9693aa 100644
--- a/gdb/mi/mi-out.c
+++ b/gdb/mi/mi-out.c
@@ -166,7 +166,6 @@ mi_begin (struct ui_out *uiout, enum ui_out_type type, int level,
 
   if (data->suppress_output)
     return;
-
   mi_open (uiout, id, type);
 }
 
@@ -403,7 +402,7 @@ mi_out_new (int mi_version)
   int flags = 0;
 
   mi_out_data *data = XNEW (mi_out_data);
-  data->suppress_field_separator = 0;
+  data->suppress_field_separator = 1;
   data->suppress_output = 0;
   data->mi_version = mi_version;
   /* FIXME: This code should be using a ``string_file'' and not the
diff --git a/gdb/python/lib/gdb/kdump/__init__.py b/gdb/python/lib/gdb/kdump/__init__.py
new file mode 100644
index 0000000..f9cfe7b
--- /dev/null
+++ b/gdb/python/lib/gdb/kdump/__init__.py
@@ -0,0 +1,20 @@
+import gdb
+
+def list_head(obj, field, typ=None):
+	"""Iterate over a kernel list_head embedded in obj at field.
+
+	Example:
+	  import gdb.kdump
+	  sz = gdb.lookup_symbol("init_task")[0]
+	  g = gdb.kdump.list_head(sz.value(), sz.value().type["tasks"])
+	"""
+	if typ is None:
+		typ = obj.type
+	nextaddr = long(obj[field.name]["next"])
+	addr = long(long(obj.address) + (field.bitpos >> 3))
+	yield obj
+
+	while addr != nextaddr:
+		nv = gdb.Value(long(nextaddr - (field.bitpos >> 3))).cast(typ.pointer()).dereference()
+		nextaddr = long(nv[field.name]["next"])
+		yield nv
diff --git a/gdb/python/py-block.c b/gdb/python/py-block.c
index 6c0f5cb..5ae44d6 100644
--- a/gdb/python/py-block.c
+++ b/gdb/python/py-block.c
@@ -103,6 +103,106 @@ blpy_iter (PyObject *self)
   return (PyObject *) block_iter_obj;
 }
 
+
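+/* A Python iterator over a gdb dictionary: each call to next()
+   returns the object produced by FUNC for the current entry and
+   advances the underlying dict_iterator.  */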
+typedef struct {
+	PyObject_HEAD
+	struct dict_iterator iter;
+	int finished;
+	void *value;
+	PyObject *(*func)(void *);
+} DictIter;
+
+PyObject* DictIter_iter(PyObject *self);
+PyObject* DictIter_iter(PyObject *self)
+{
+  Py_INCREF(self);
+  return self;
+}
+
+static PyObject *obj_to_sym(void *val)
+{
+  PyObject *v = symbol_to_symbol_object ((struct symbol*)val);
+  return v;
+}
+
+PyObject* DictIter_iternext(PyObject *self);
+PyObject* DictIter_iternext(PyObject *self)
+{
+  DictIter *p = (DictIter *)self;
+  PyObject *v;
+  void *n;
+
+  if (p->finished == 1)
+    return NULL;
+
+  v = p->func((struct symbol*)p->value);
+
+  n = dict_iterator_next(&p->iter);
+
+  if (!n)
+    p->finished = 1;
+  else p->value = n;
+
+  return v;
+}
+
+static PyTypeObject DictIterType = {
+  PyObject_HEAD_INIT(NULL)
+  0,				/*ob_size*/
+  "gdb._DictIter",		/*tp_name*/
+  sizeof(DictIter),		/*tp_basicsize*/
+  0,				/*tp_itemsize*/
+  0,				/*tp_dealloc*/
+  0,				/*tp_print*/
+  0,				/*tp_getattr*/
+  0,				/*tp_setattr*/
+  0,				/*tp_compare*/
+  0,				/*tp_repr*/
+  0,				/*tp_as_number*/
+  0,				/*tp_as_sequence*/
+  0,				/*tp_as_mapping*/
+  0,				/*tp_hash */
+  0,				/*tp_call*/
+  0,				/*tp_str*/
+  0,				/*tp_getattro*/
+  0,				/*tp_setattro*/
+  0,				/*tp_as_buffer*/
+  Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_ITER,
+  "gdb dictionary iterator object.",	/* tp_doc */
+  0,				/* tp_traverse */
+  0,				/* tp_clear */
+  0,				/* tp_richcompare */
+  0,				/* tp_weaklistoffset */
+  DictIter_iter,		/* tp_iter: __iter__() method */
+  DictIter_iternext,		/* tp_iternext: next() method */
+  .tp_new = PyType_GenericNew
+};
+
+static PyObject *
+blpy_get_symbols(PyObject *self, void *closure)
+{
+  PyObject *tmp;
+  const struct block *block = NULL;
+  struct symbol *s;
+
+  BLPY_REQUIRE_VALID (self, block);
+
+  tmp = (PyObject*)PyObject_New(DictIter, &DictIterType);
+  if (!tmp) return NULL;
+
+  if (!PyObject_Init((PyObject *)tmp, &DictIterType)) {
+    Py_DECREF(tmp);
+    return NULL;
+  }
+
+  s = dict_iterator_first(block->dict, &((DictIter*)tmp)->iter);
+
+  ((DictIter*)tmp)->value = s;
+  ((DictIter*)tmp)->finished = (s == NULL);
+  ((DictIter*)tmp)->func = obj_to_sym;
+
+  return tmp;
+}
+
 static PyObject *
 blpy_get_start (PyObject *self, void *closure)
 {
@@ -437,6 +537,7 @@ gdbpy_initialize_blocks (void)
   if (PyType_Ready (&block_syms_iterator_object_type) < 0)
     return -1;
 
+  if (PyType_Ready (&DictIterType) < 0)
+    return -1;
+
   /* Register an objfile "free" callback so we can properly
      invalidate blocks when an object file is about to be
      deleted.  */
@@ -461,6 +562,7 @@ Return true if this block is valid, false if not." },
 };
 
 static PyGetSetDef block_object_getset[] = {
+  { "symbols", blpy_get_symbols, NULL, "An iterator over the symbols of this block.", NULL },
   { "start", blpy_get_start, NULL, "Start address of the block.", NULL },
   { "end", blpy_get_end, NULL, "End address of the block.", NULL },
   { "function", blpy_get_function, NULL,
diff --git a/gdb/typeprint.c b/gdb/typeprint.c
index 5a97ace..323deaf 100644
--- a/gdb/typeprint.c
+++ b/gdb/typeprint.c
@@ -52,7 +52,8 @@ const struct type_print_options type_print_raw_options =
   1,				/* print_typedefs */
   NULL,				/* local_typedefs */
   NULL,				/* global_table */
-  NULL				/* global_printers */
+  NULL,				/* global_printers */
+  0				/* print_offsets */
 };
 
 /* The default flags for 'ptype' and 'whatis'.  */
@@ -64,7 +65,8 @@ static struct type_print_options default_ptype_flags =
   1,				/* print_typedefs */
   NULL,				/* local_typedefs */
   NULL,				/* global_table */
-  NULL				/* global_printers */
+  NULL,				/* global_printers */
+  0				/* print_offsets */
 };
 
 \f
@@ -436,6 +438,9 @@ whatis_exp (char *exp, int show)
 		case 'T':
 		  flags.print_typedefs = 1;
 		  break;
+		case 'o':
+		  flags.print_offsets = 1;
+		  break;
 		default:
 		  error (_("unrecognized flag '%c'"), *exp);
 		}
diff --git a/gdb/typeprint.h b/gdb/typeprint.h
index bdff41b..36f5c11 100644
--- a/gdb/typeprint.h
+++ b/gdb/typeprint.h
@@ -46,6 +46,8 @@ struct type_print_options
   /* The list of type printers associated with the global typedef
      table.  This is intentionally opaque.  */
   struct ext_lang_type_printers *global_printers;
+
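+  /* True means print the offset of each struct member (set by the 'o'
+     flag of ptype/whatis).  */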
+  unsigned int print_offsets;
 };
 
 extern const struct type_print_options type_print_raw_options;
-- 
2.7.0

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 3/4] Add SLAB allocator understanding.
  2016-01-31 21:45 Enable gdb to open Linux kernel dumps Ales Novak
  2016-01-31 21:45 ` [PATCH 4/4] Minor cleanups Ales Novak
  2016-01-31 21:45 ` [PATCH 2/4] Add Jeff Mahoney's py-crash patches Ales Novak
@ 2016-01-31 21:45 ` Ales Novak
  2016-02-01 13:21   ` Kieran Bingham
  2016-01-31 21:45 ` [PATCH 1/4] Create new target "kdump" which uses libkdumpfile: https://github.com/ptesarik/libkdumpfile to access contents of compressed kernel dump Ales Novak
  2016-02-01 11:27 ` Enable gdb to open Linux kernel dumps Kieran Bingham
  4 siblings, 1 reply; 31+ messages in thread
From: Ales Novak @ 2016-01-31 21:45 UTC (permalink / raw)
  To: gdb-patches; +Cc: Vlastimil Babka

From: Vlastimil Babka <vbabka@suse.cz>

---
 gdb/kdump.c | 1259 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 1211 insertions(+), 48 deletions(-)

diff --git a/gdb/kdump.c b/gdb/kdump.c
index b7b0ef5..e231559 100644
--- a/gdb/kdump.c
+++ b/gdb/kdump.c
@@ -58,6 +58,7 @@
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <unistd.h>
+#include <hashtab.h>
 
 
 #include <dirent.h>
@@ -73,6 +74,7 @@ typedef unsigned long long offset;
 #define F_UNKN_ENDIAN    4
 
 unsigned long long kt_int_value (void *buff);
+unsigned long long kt_long_value (void *buff);
 unsigned long long kt_ptr_value (void *buff);
 
 int kt_hlist_head_for_each_node (char *addr, int(*func)(void *,offset), void *data);
@@ -97,12 +99,17 @@ static void core_close (struct target_ops *self);
 
 typedef unsigned long long offset;
 
+static int nr_node_ids = 1;
+static int nr_cpu_ids = 1;
+
 #define KDUMP_TYPE const char *_name; int _size; int _offset; struct type *_origtype
 #define GET_GDB_TYPE(typ) types. typ ._origtype
 #define GET_TYPE_SIZE(typ) (TYPE_LENGTH(GET_GDB_TYPE(typ)))
 #define MEMBER_OFFSET(type,member) types. type. member
-#define KDUMP_TYPE_ALLOC(type) kdump_type_alloc(GET_GDB_TYPE(type))
-#define KDUMP_TYPE_GET(type,off,where) kdump_type_get(GET_GDB_TYPE(type), off, 0, where)
+#define KDUMP_TYPE_ALLOC(type) kdump_type_alloc(GET_GDB_TYPE(type), 0)
+#define KDUMP_TYPE_ALLOC_EXTRA(type,extra) kdump_type_alloc(GET_GDB_TYPE(type),extra)
+#define KDUMP_TYPE_GET(type,off,where) kdump_type_get(GET_GDB_TYPE(type), off, 0, where, 0)
+#define KDUMP_TYPE_GET_EXTRA(type,off,where,extra) kdump_type_get(GET_GDB_TYPE(type), off, 0, where, extra)
 #define KDUMP_TYPE_FREE(where) free(where)
 #define SYMBOL(var,name) do { var = lookup_symbol(name, NULL, VAR_DOMAIN, NULL); if (! var) { fprintf(stderr, "Cannot lookup_symbol(" name ")\n"); goto error; } } while(0)
 #define OFFSET(x) (types.offsets. x)
@@ -112,12 +119,12 @@ typedef unsigned long long offset;
 #define GET_REGISTER_OFFSET(reg) (MEMBER_OFFSET(user_regs_struct,reg)/GET_TYPE_SIZE(_voidp))
 #define GET_REGISTER_OFFSET_pt(reg) (MEMBER_OFFSET(pt_regs,reg)/GET_TYPE_SIZE(_voidp))
 
-#define list_for_each(pos, head) \
-	for (pos = kt_ptr_value(head); pos != (head); KDUMP_TYPE_GET(_voidp,pos,&pos)
 
-#define list_head_for_each(head,lhb, _nxt) for((_nxt = kt_ptr_value(lhb)), KDUMP_TYPE_GET(list_head, _nxt, lhb);\
-	(_nxt = kt_ptr_value(lhb)) != head; \
-	KDUMP_TYPE_GET(list_head, _nxt, lhb))
+#define list_head_for_each(head, lhb, _nxt)				      \
+	for(KDUMP_TYPE_GET(list_head, head, lhb), _nxt = kt_ptr_value(lhb),   \
+					KDUMP_TYPE_GET(list_head, _nxt, lhb); \
+		_nxt != head;						      \
+		_nxt = kt_ptr_value(lhb), KDUMP_TYPE_GET(list_head, _nxt, lhb))
 
 enum x86_64_regs {
 	reg_RAX = 0,
@@ -184,6 +191,10 @@ struct {
 
 	struct {
 		KDUMP_TYPE;
+	} _long;
+
+	struct {
+		KDUMP_TYPE;
 	} _voidp;
 
 	struct {
@@ -345,10 +356,54 @@ struct {
 		offset *percpu_offsets;
 	} offsets;
 
+	struct {
+		KDUMP_TYPE;
+		offset flags;
+		offset lru;
+		offset first_page;
+	} page;
+
+	struct {
+		KDUMP_TYPE;
+		offset array;
+		offset name;
+		offset list;
+		offset nodelists;
+		offset num;
+		offset buffer_size;
+	} kmem_cache;
+
+	struct {
+		KDUMP_TYPE;
+		offset slabs_partial;
+		offset slabs_full;
+		offset slabs_free;
+		offset shared;
+		offset alien;
+		offset free_objects;
+	} kmem_list3;
+
+	struct {
+		KDUMP_TYPE;
+		offset avail;
+		offset limit;
+		offset entry;
+	} array_cache;
+
+	struct {
+		KDUMP_TYPE;
+		offset list;
+		offset inuse;
+		offset free;
+		offset s_mem;
+	} slab;
+
 	struct cpuinfo *cpu;
 	int ncpus;
 } types;
 
+unsigned PG_tail, PG_slab;
+
 struct task_info {
 	offset task_struct;
 	offset sp;
@@ -404,6 +459,21 @@ unsigned long long kt_int_value (void *buff)
 	return val;
 }
 
+unsigned long long kt_long_value (void *buff)
+{
+	unsigned long long val;
+
+	if (GET_TYPE_SIZE(_long) == 4) {
+		val = *(int32_t*)buff;
+		if (types.flags & F_BIG_ENDIAN) val = __bswap_32(val);
+	} else {
+		val = *(int64_t*)buff;
+		if (types.flags & F_BIG_ENDIAN) val = __bswap_64(val);
+	}
+
+	return val;
+}
+
 unsigned long long kt_ptr_value (void *buff)
 {
 	unsigned long long val;
@@ -417,6 +487,49 @@ unsigned long long kt_ptr_value (void *buff)
 	}
 	return val;
 }
+
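+/* Read a pointer-sized value directly from target memory at ADDR.  */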
+static unsigned long long kt_ptr_value_off (offset addr)
+{
+	char buf[8];
+	unsigned len = GET_TYPE_SIZE(_voidp);
+
+	if (target_read_raw_memory(addr, (void *)buf, len)) {
+		warning(_("Cannot read target memory addr=%llx length=%u\n"),
+								addr, len);
+		return -1;
+	}
+
+	return kt_ptr_value(buf);
+}
+
+static unsigned long long kt_int_value_off (offset addr)
+{
+	char buf[8];
+	unsigned len = GET_TYPE_SIZE(_int);
+
+	if (target_read_raw_memory(addr, (void *)buf, len)) {
+		warning(_("Cannot read target memory addr=%llx length=%u\n"),
+								addr, len);
+		return -1;
+	}
+
+	return kt_int_value(buf);
+}
+
+char * kt_strndup (offset src, int n);
+char * kt_strndup (offset src, int n)
+{
+	char *dest = NULL;
+	int err;
+
+	target_read_string(src, &dest, n, &err);
+
+	if (err)
+		fprintf(stderr, "target_read_string error: %d\n", err);
+
+	return dest;
+}
+
 static offset get_symbol_address(const char *sname);
 static offset get_symbol_address(const char *sname)
 {
@@ -519,35 +632,55 @@ static int kdump_type_member_init (struct type *type, const char *name, offset *
 {
 	int i;
 	struct field *f;
+	int ret;
+	enum type_code tcode;
+	offset off;
+
 	f = TYPE_FIELDS(type);
-	for (i = 0; i < TYPE_NFIELDS(type); i ++) {
-		if (! strcmp(f->name, name)) {
-			*poffset = (f->loc.physaddr >> 3);
+	for (i = 0; i < TYPE_NFIELDS(type); i++, f++) {
+		//printf("fieldname \'%s\'\n", f->name);
+		off = (f->loc.physaddr >> 3);
+		if (!strcmp(f->name, name)) {
+			*poffset = off;
 			return 0;
 		}
-		f++;
+		if (strlen(f->name))
+			continue;
+		tcode = TYPE_CODE(f->type);
+		if (tcode == TYPE_CODE_UNION || tcode == TYPE_CODE_STRUCT) {
+			//printf("recursing into unnamed union/struct\n");
+			ret = kdump_type_member_init(f->type, name, poffset);
+			if (ret != -1) {
+				*poffset += off;
+				return ret;
+			}
+		}
 	}
 	return -1;
 }
 
-static void *kdump_type_alloc(struct type *type)
+static void *kdump_type_alloc(struct type *type, size_t extra_size)
 {
 	int allocated = 0;
 	void *buff;
 
 	allocated = 1;
-	buff = malloc(TYPE_LENGTH(type));
+	buff = malloc(TYPE_LENGTH(type) + extra_size);
 	if (buff == NULL) {
-		warning(_("Cannot allocate memory of %d length\n"), (int)TYPE_LENGTH(type));
+		warning(_("Cannot allocate memory of %u length + %lu extra\n"),
+					TYPE_LENGTH(type), extra_size);
 		return NULL;
 	}
 	return buff;
 }
 
-static int kdump_type_get(struct type *type, offset addr, int pos, void *buff)
+static int kdump_type_get(struct type *type, offset addr, int pos, void *buff,
+							size_t extra_size)
 {
-	if (target_read_raw_memory(addr + (TYPE_LENGTH(type)*pos), buff, TYPE_LENGTH(type))) {
-		warning(_("Cannot read target memory of %d length\n"), (int)TYPE_LENGTH(type));
+	if (target_read_raw_memory(addr + (TYPE_LENGTH(type)*pos), buff,
+					TYPE_LENGTH(type) + extra_size)) {
+		warning(_("Cannot read target memory of %u length + %lu extra\n"),
+					TYPE_LENGTH(type), extra_size);
 		return 1;
 	}
 	return 0;
@@ -568,7 +701,8 @@ int kdump_types_init(int flags)
 	#define INIT_BASE_TYPE_(name,tname) if(kdump_type_init(&types. tname ._origtype, &types. tname ._size, #name, T_BASE)) { fprintf(stderr, "Cannot base find type \'%s\'", #name); break; }
 	#define INIT_REF_TYPE(name) if(kdump_type_init(&types. name ._origtype, &types. name ._size, #name, T_REF)) { fprintf(stderr, "Cannot ref find type \'%s\'", #name); break; }
 	#define INIT_REF_TYPE_(name,tname) if(kdump_type_init(&types. tname ._origtype, &types. tname ._size, #name, T_REF)) { fprintf(stderr, "Cannot ref find type \'%s\'", #name); break; }
-	#define INIT_STRUCT_MEMBER(sname,mname) if(kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mname)) { break; }
+	#define INIT_STRUCT_MEMBER(sname,mname) if(kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mname)) \
+		{ fprintf(stderr, "Cannot find struct \'%s\' member \'%s\'", #sname, #mname); break; }
 
 	/** initialize member with different name than the containing one */
 	#define INIT_STRUCT_MEMBER_(sname,mname,mmname) if(kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mmname)) { break; }
@@ -576,8 +710,9 @@ int kdump_types_init(int flags)
 	/** don't fail if the member is not present */
 	#define INIT_STRUCT_MEMBER__(sname,mname) kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mname)
 	do {
-		INIT_BASE_TYPE_(int,_int);
-		INIT_REF_TYPE_(void,_voidp);
+		INIT_BASE_TYPE_(int,_int);
+		INIT_BASE_TYPE_(long,_long);
+		INIT_REF_TYPE_(void,_voidp);
 
 		INIT_STRUCT(list_head);
 		INIT_STRUCT_MEMBER(list_head,prev);
@@ -728,9 +863,43 @@ int kdump_types_init(int flags)
 			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx6);
 			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx7);
 		}
+		INIT_STRUCT(page);
+		INIT_STRUCT_MEMBER(page, flags);
+		INIT_STRUCT_MEMBER(page, lru);
+		INIT_STRUCT_MEMBER(page, first_page);
+
+		INIT_STRUCT(kmem_cache);
+		INIT_STRUCT_MEMBER(kmem_cache, name);
+		INIT_STRUCT_MEMBER_(kmem_cache, next, list);
+		INIT_STRUCT_MEMBER(kmem_cache, nodelists);
+		INIT_STRUCT_MEMBER(kmem_cache, num);
+		INIT_STRUCT_MEMBER(kmem_cache, array);
+		INIT_STRUCT_MEMBER(kmem_cache, buffer_size);
+
+		INIT_STRUCT(kmem_list3);
+		INIT_STRUCT_MEMBER(kmem_list3, slabs_partial);
+		INIT_STRUCT_MEMBER(kmem_list3, slabs_full);
+		INIT_STRUCT_MEMBER(kmem_list3, slabs_free);
+		INIT_STRUCT_MEMBER(kmem_list3, shared);
+		INIT_STRUCT_MEMBER(kmem_list3, alien);
+		INIT_STRUCT_MEMBER(kmem_list3, free_objects);
+
+		INIT_STRUCT(array_cache);
+		INIT_STRUCT_MEMBER(array_cache, avail);
+		INIT_STRUCT_MEMBER(array_cache, limit);
+		INIT_STRUCT_MEMBER(array_cache, entry);
+
+		INIT_STRUCT(slab);
+		INIT_STRUCT_MEMBER(slab, list);
+		INIT_STRUCT_MEMBER(slab, inuse);
+		INIT_STRUCT_MEMBER(slab, free);
+		INIT_STRUCT_MEMBER(slab, s_mem);
 		ret = 0;
 	} while(0);
 
+	PG_tail = get_symbol_value("PG_tail");
+	PG_slab = get_symbol_value("PG_slab");
+
 	if (ret) {
 		fprintf(stderr, "Cannot init types\n");
 	}
@@ -738,6 +907,148 @@ int kdump_types_init(int flags)
 	return ret;
 }
 
+struct list_iter {
+	offset curr;
+	offset prev;
+	offset head;
+	offset last;
+	offset fast;
+	int cont;
+	int error;
+};
+
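+/*
+ * Start an iteration at o_head itself, i.e. treat o_head as the first
+ * node of the list rather than as its (skipped) head.
+ */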
+static void list_first_from(struct list_iter *iter, offset o_head)
+{
+	char b_head[GET_TYPE_SIZE(list_head)];
+
+	iter->fast = 0;
+	iter->error = 0;
+	iter->cont = 1;
+
+	if (KDUMP_TYPE_GET(list_head, o_head, b_head)) {
+		warning(_("Could not read list_head %llx in list_first()\n"),
+								o_head);
+		iter->error = 1;
+		iter->cont = 0;
+		return;
+	}
+
+	iter->curr = o_head;
+	iter->last = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, prev));
+
+	iter->head = o_head;
+	iter->prev = iter->last;
+}
+
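+/*
+ * Start an iteration at the first element after the list_head at
+ * o_head; an empty list ends the iteration immediately.
+ */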
+static void list_first(struct list_iter *iter, offset o_head)
+{
+	char b_head[GET_TYPE_SIZE(list_head)];
+
+	iter->fast = 0;
+	iter->error = 0;
+	iter->cont = 1;
+
+	if (KDUMP_TYPE_GET(list_head, o_head, b_head)) {
+		warning(_("Could not read list_head %llx in list_first()\n"),
+								o_head);
+		iter->error = 1;
+		iter->cont = 0;
+		return;
+	}
+
+	iter->curr = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, next));
+	iter->last = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, prev));
+
+	/* Empty list */
+	if (iter->curr == o_head) {
+		if (iter->last != o_head) {
+			warning(_("list_head %llx is empty, but prev points to %llx\n"),
+							o_head,	iter->last);
+			iter->error = 1;
+		}
+		iter->cont = 0;
+		return;
+	}
+
+	iter->head = o_head;
+	iter->prev = o_head;
+}
+
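+/*
+ * Advance the iterator to the next element, verifying that the ->prev
+ * links are consistent and switching to cycle detection when they are
+ * not.
+ */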
+static void list_next(struct list_iter *iter)
+{
+	char b_head[GET_TYPE_SIZE(list_head)];
+	offset o_next, o_prev;
+
+	if (KDUMP_TYPE_GET(list_head, iter->curr, b_head)) {
+		warning(_("Could not read list_head %llx in list_next()\n"),
+								iter->curr);
+		iter->error = 1;
+		iter->cont = 0;
+		return;
+	}
+
+	o_next = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, next));
+	o_prev = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, prev));
+
+	if (o_next == iter->head) {
+		if (iter->curr != iter->last) {
+			warning(_("list item %llx appears to be last, but list_head %llx ->prev points to %llx\n"),
+						iter->curr, iter->head,
+						iter->last);
+			iter->error = 1;
+		}
+		iter->cont = 0;
+		return;
+	}
+
+	if (o_prev != iter->prev) {
+		warning(_("list item %llx ->next is %llx but the latter's ->prev is %llx\n"),
+					iter->prev, iter->curr, o_prev);
+		iter->error = 1;
+		/*
+		 * A broken ->prev link means there might be a cycle that
+		 * does not include the head; start detecting cycles.
+		 */
+		if (!iter->fast)
+			iter->fast = iter->curr;
+	}
+
+	/*
+	 * Are we detecting cycles?  If so, advance iter->fast by two
+	 * steps and compare it to iter->curr after each step
+	 * (Floyd's tortoise-and-hare algorithm).
+	 */
+	if (iter->fast) {
+		int i = 2;
+		while(i--) {
+			/*
+			 *  Simply ignore failure to read fast->next, the next
+			 *  call to list_next() will find out anyway.
+			 */
+			if (KDUMP_TYPE_GET(list_head, iter->fast, b_head))
+				break;
+			iter->fast = kt_ptr_value(
+				b_head + MEMBER_OFFSET(list_head, next));
+			if (iter->curr == iter->fast) {
+				warning(_("list_next() detected cycle, aborting traversal\n"));
+				iter->error = 1;
+				iter->cont = 0;
+				return;
+			}
+		}
+	}
+
+	iter->prev = iter->curr;
+	iter->curr = o_next;
+}
+
+#define list_for_each(iter, o_head) \
+	for (list_first(&(iter), o_head); (iter).cont; list_next(&(iter)))
+
+#define list_for_each_from(iter, o_head) \
+	for (list_first_from(&(iter), o_head); (iter).cont; list_next(&(iter)))
+
 int kt_hlist_head_for_each_node (char *addr, int(*func)(void *,offset), void *data)
 {
 	char *b = NULL;
@@ -995,7 +1306,8 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
 			 * FIXME: use the size obtained from debuginfo
 			 */
 			rsp += 0x148;
-			target_read_raw_memory(rsp - 0x8 * (1 + 6), (void*)regs, 0x8 * 6);
+			if (target_read_raw_memory(rsp - 0x8 * (1 + 6), (void*)regs, 0x8 * 6))
+				warning(_("Could not read regs\n"));
 
 			regcache_raw_supply(rc, 15, &regs[5]);
 			regcache_raw_supply(rc, 14, &regs[4]);
@@ -1026,7 +1338,6 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
 			REG(reg_RSP,sp);
 			task_info->sp = reg;
 			REG(reg_RIP,ip);
-			printf ("task %p cpu %02d rip = %p\n", (void*)task_info->task_struct, cpu, reg);
 			task_info->ip = reg;
 			REG(reg_RAX,ax);
 			REG(reg_RCX,cx);
@@ -1092,13 +1403,860 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
 	return 0;
 }
 
+struct list_head {
+	offset next;
+	offset prev;
+};
+
+struct page {
+	unsigned long flags;
+	struct list_head lru;
+	offset first_page;
+	int valid;
+};
+
+enum slab_type {
+	slab_partial,
+	slab_full,
+	slab_free
+};
+
+static const char *slab_type_names[] = {
+	"partial",
+	"full",
+	"free"
+};
+
+enum ac_type {
+	ac_percpu,
+	ac_shared,
+	ac_alien
+};
+
+static const char *ac_type_names[] = {
+	"percpu",
+	"shared",
+	"alien"
+};
+
+typedef unsigned int kmem_bufctl_t;
+#define BUFCTL_END      (((kmem_bufctl_t)(~0U))-0)
+#define BUFCTL_FREE     (((kmem_bufctl_t)(~0U))-1)
+#define BUFCTL_ACTIVE   (((kmem_bufctl_t)(~0U))-2)
+#define SLAB_LIMIT      (((kmem_bufctl_t)(~0U))-3)
+
+
+struct kmem_cache {
+	offset o_cache;
+	const char *name;
+	unsigned int num;
+	htab_t obj_ac;
+	unsigned int buffer_size;
+	int array_caches_inited;
+	int broken;
+};
+
+struct kmem_slab {
+	offset o_slab;
+	kmem_bufctl_t free;
+	unsigned int inuse;
+	offset s_mem;
+	kmem_bufctl_t *bufctl;
+};
+
+/* Cache of kmem_cache structs indexed by offset */
+static htab_t kmem_cache_cache;
+
+/* List_head of all kmem_caches */
+offset o_slab_caches;
+
+/* Just get the least significant bits of the offset */
+static hashval_t kmem_cache_hash(const void *p)
+{
+	return ((struct kmem_cache*)p)->o_cache;
+}
+
+static int kmem_cache_eq(const void *cache, const void *off)
+{
+	return (((struct kmem_cache*)cache)->o_cache == *(offset *)off);
+}
+
+struct kmem_ac {
+	offset offset;
+	enum ac_type type;
+	/* At which node the cache resides (-1 for percpu) */
+	int at_node;
+	/* For which node or cpu the cache is (-1 for shared) */
+	int for_node_cpu;
+};
+
+/* A mapping between object's offset and array_cache */
+struct kmem_obj_ac {
+	offset obj;
+	struct kmem_ac *ac;
+};
+
+static hashval_t kmem_ac_hash(const void *p)
+{
+	return ((struct kmem_obj_ac*)p)->obj;
+}
+
+static int kmem_ac_eq(const void *obj, const void *off)
+{
+	return (((struct kmem_obj_ac*)obj)->obj == *(offset *)off);
+}
+
+//FIXME: support the CONFIG_PAGEFLAGS_EXTENDED variant?
+#define PageTail(page)	(page.flags & 1UL << PG_tail)
+#define PageSlab(page)	(page.flags & 1UL << PG_slab)
+
+//TODO: get this via libkdumpfile somehow?
+#define VMEMMAP_START	0xffffea0000000000UL
+#define PAGE_SHIFT	12
+
+static unsigned long long memmap = VMEMMAP_START;
+
+static offset pfn_to_page_memmap(unsigned long pfn)
+{
+	return memmap + pfn*GET_TYPE_SIZE(page);
+}
+
+//TODO: once the config querying below works, support all variants
+#define pfn_to_page(pfn) pfn_to_page_memmap(pfn)
+
+static kdump_paddr_t transform_memory(kdump_paddr_t addr);
+
+static unsigned long addr_to_pfn(offset addr)
+{
+	kdump_paddr_t pa = transform_memory(addr);
+
+	return pa >> PAGE_SHIFT;
+}
+
+#define virt_to_opage(addr)	pfn_to_page(addr_to_pfn(addr))
+static int check_slab_obj(offset obj);
+static int init_kmem_caches(void);
+static struct page virt_to_head_page(offset addr);
+
+
+//TODO: have some hashtable-based cache as well?
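+/*
+ * Read the struct slab at o_slab together with its bufctl array (the
+ * per-object free list) into a host-side struct kmem_slab.
+ */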
+static struct kmem_slab *
+init_kmem_slab(struct kmem_cache *cachep, offset o_slab)
+{
+	char b_slab[GET_TYPE_SIZE(slab)];
+	struct kmem_slab *slab;
+	offset o_bufctl = o_slab + GET_TYPE_SIZE(slab);
+	size_t bufctl_size = cachep->num * sizeof(kmem_bufctl_t);
+	//FIXME: use target's kmem_bufctl_t typedef, which didn't work in
+	//INIT_BASE_TYPE though
+	size_t bufctl_size_target = cachep->num * GET_TYPE_SIZE(_int);
+	char b_bufctl[bufctl_size_target];
+	int i;
+
+	if (KDUMP_TYPE_GET(slab, o_slab, b_slab)) {
+		warning(_("error reading struct slab %llx of cache %s\n"),
+							o_slab, cachep->name);
+		return NULL;
+	}
+
+	slab = malloc(sizeof(struct kmem_slab));
+
+	slab->o_slab = o_slab;
+	slab->inuse = kt_int_value(b_slab + MEMBER_OFFSET(slab, inuse));
+	slab->free = kt_int_value(b_slab + MEMBER_OFFSET(slab, free));
+	slab->s_mem = kt_ptr_value(b_slab + MEMBER_OFFSET(slab, s_mem));
+
+	slab->bufctl = malloc(bufctl_size);
+	if (target_read_raw_memory(o_bufctl, (void *) b_bufctl,
+				bufctl_size_target)) {
+		warning(_("error reading bufctl %llx of slab %llx of cache %s\n"),
+						o_bufctl, o_slab, cachep->name);
+		for (i = 0; i < cachep->num; i++)
+			slab->bufctl[i] = BUFCTL_END;
+
+		return slab;
+	}
+
+	for (i = 0; i < cachep->num; i++)
+		slab->bufctl[i] = kt_int_value(b_bufctl + i*GET_TYPE_SIZE(_int));
+
+	return slab;
+}
+
+static void free_kmem_slab(struct kmem_slab *slab)
+{
+	free(slab->bufctl);
+	free(slab);
+}
+
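+/*
+ * Sanity-check a single slab: walk its bufctl free list, verify that
+ * inuse + free matches the cache's objects-per-slab, that the slab is
+ * on the expected list and that each object's struct page points back
+ * to this cache and slab.
+ */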
+static unsigned int
+check_kmem_slab(struct kmem_cache *cachep, struct kmem_slab *slab,
+							enum slab_type type)
+{
+	unsigned int counted_free = 0;
+	kmem_bufctl_t i;
+	offset o_slab = slab->o_slab;
+	offset o_obj, o_prev_obj = 0;
+	struct page page;
+	offset o_page_cache, o_page_slab;
+
+	i = slab->free;
+	while (i != BUFCTL_END) {
+		counted_free++;
+
+		if (counted_free > cachep->num) {
+			printf("free bufctl cycle detected in slab %llx\n", o_slab);
+			break;
+		}
+		if (i > cachep->num) {
+			printf("bufctl value overflow (%d) in slab %llx\n", i, o_slab);
+			break;
+		}
+
+		i = slab->bufctl[i];
+	}
+
+//	printf("slab inuse=%d cnt_free=%d num=%d\n", slab->inuse, counted_free,
+//								cachep->num);
+
+	if (slab->inuse + counted_free != cachep->num)
+		 printf("slab %llx #objs mismatch: inuse=%d + cnt_free=%d != num=%d\n",
+				o_slab, slab->inuse, counted_free, cachep->num);
+
+	switch (type) {
+	case slab_partial:
+		if (!slab->inuse)
+			printf("slab %llx has zero inuse but is on slabs_partial\n", o_slab);
+		else if (slab->inuse == cachep->num)
+			printf("slab %llx is full (%d) but is on slabs_partial\n", o_slab, slab->inuse);
+		break;
+	case slab_full:
+		if (!slab->inuse)
+			printf("slab %llx has zero inuse but is on slabs_full\n", o_slab);
+		else if (slab->inuse < cachep->num)
+			printf("slab %llx has %d/%d inuse but is on slabs_full\n", o_slab, slab->inuse, cachep->num);
+		break;
+	case slab_free:
+		if (slab->inuse)
+			printf("slab %llx has %d/%d inuse but is on slabs_free\n", o_slab, slab->inuse, cachep->num);
+		break;
+	default:
+		exit(1);
+	}
+
+	for (i = 0; i < cachep->num; i++) {
+		o_obj = slab->s_mem + i * cachep->buffer_size;
+		if (o_prev_obj >> PAGE_SHIFT == o_obj >> PAGE_SHIFT)
+			continue;
+
+		o_prev_obj = o_obj;
+		page = virt_to_head_page(o_obj);
+		if (!page.valid) {
+			warning(_("slab %llx object %llx could not read struct page\n"),
+					o_slab, o_obj);
+			continue;
+		}
+		if (!PageSlab(page))
+			warning(_("slab %llx object %llx is not on PageSlab page\n"),
+					o_slab, o_obj);
+		o_page_cache = page.lru.next;
+		o_page_slab = page.lru.prev;
+
+		if (o_page_cache != cachep->o_cache)
+			warning(_("cache %llx (%s) object %llx is on page where lru.next points to %llx and not the cache\n"),
+					cachep->o_cache, cachep->name, o_obj,
+					o_page_cache);
+		if (o_page_slab != o_slab)
+			warning(_("slab %llx object %llx is on page where lru.prev points to %llx and not the slab\n"),
+					o_slab, o_obj, o_page_slab);
+	}
+
+	return counted_free;
+}
+
+static unsigned long
+check_kmem_slabs(struct kmem_cache *cachep, offset o_slabs,
+							enum slab_type type)
+{
+	struct list_iter iter;
+	offset o_slab;
+	struct kmem_slab *slab;
+	unsigned long counted_free = 0;
+
+//	printf("checking slab list %llx type %s\n", o_slabs,
+//							slab_type_names[type]);
+
+	list_for_each(iter, o_slabs) {
+		o_slab = iter.curr - MEMBER_OFFSET(slab, list);
+//		printf("found slab: %llx\n", o_slab);
+		slab = init_kmem_slab(cachep, o_slab);
+		if (!slab)
+			continue;
+
+		counted_free += check_kmem_slab(cachep, slab, type);
+		free_kmem_slab(slab);
+	}
+
+	return counted_free;
+}
+
+/* Check that o_obj points to an object on slab of kmem_cache */
+static void check_kmem_obj(struct kmem_cache *cachep, offset o_obj)
+{
+	struct page page;
+	offset o_cache, o_slab;
+	offset obj_base;
+	unsigned int idx;
+	struct kmem_slab *slabp;
+
+	page = virt_to_head_page(o_obj);
+
+	if (!PageSlab(page))
+		warning(_("object %llx is not on PageSlab page\n"), o_obj);
+
+	o_cache = page.lru.next;
+	if (o_cache != cachep->o_cache)
+		warning(_("object %llx is on page that should belong to cache "
+				"%llx (%s), but lru.next points to %llx\n"),
+				o_obj, cachep->o_cache, cachep->name, o_cache);
+
+	o_slab = page.lru.prev;
+	slabp = init_kmem_slab(cachep, o_slab);
+
+	//TODO: check also that slabp is in appropriate lists? could be quite slow...
+	if (!slabp)
+		return;
+
+	//TODO: kernel implementation uses reciprocal_divide, check?
+	idx = (o_obj - slabp->s_mem) / cachep->buffer_size;
+	obj_base = slabp->s_mem + idx * cachep->buffer_size;
+
+	if (obj_base != o_obj)
+		warning(_("pointer %llx should point to beginning of object "
+				"but object's address is %llx\n"), o_obj,
+				obj_base);
+
+	if (idx >= cachep->num)
+		warning(_("object %llx has index %u, but there should be only "
+				"%u objects on slabs of cache %llx"),
+				o_obj, idx, cachep->num, cachep->o_cache);
+}
+
+static void init_kmem_array_cache(struct kmem_cache *cachep,
+		offset o_array_cache, char *b_array_cache, enum ac_type type,
+		int id1, int id2)
+{
+	unsigned int avail, limit, i;
+	char *b_entries;
+	offset o_entries = o_array_cache + MEMBER_OFFSET(array_cache, entry);
+	offset o_obj;
+	void **slot;
+	struct kmem_ac *ac;
+	struct kmem_obj_ac *obj_ac;
+
+	avail = kt_int_value(b_array_cache + MEMBER_OFFSET(array_cache, avail));
+	limit = kt_int_value(b_array_cache + MEMBER_OFFSET(array_cache, limit));
+
+//	printf("found %s[%d,%d] array_cache %llx\n", ac_type_names[type],
+//						id1, id2, o_array_cache);
+//	printf("avail=%u limit=%u entries=%llx\n", avail, limit, o_entries);
+
+	if (avail > limit)
+		printf("array_cache %llx has avail=%d > limit=%d\n",
+						o_array_cache, avail, limit);
+
+	if (!avail)
+		return;
+
+	ac = malloc(sizeof(struct kmem_ac));
+	ac->offset = o_array_cache;
+	ac->type = type;
+	ac->at_node = id1;
+	ac->for_node_cpu = id2;
+
+	b_entries = malloc(avail * GET_TYPE_SIZE(_voidp));
+
+	if (target_read_raw_memory(o_entries, (void *)b_entries,
+					avail *	GET_TYPE_SIZE(_voidp))) {
+		warning(_("could not read entries of array_cache %llx of cache %s\n"),
+						o_array_cache, cachep->name);
+		goto done;
+	}
+
+	for (i = 0; i < avail; i++) {
+		o_obj = kt_ptr_value(b_entries + i * GET_TYPE_SIZE(_voidp));
+		//printf("cached obj: %llx\n", o_obj);
+
+		slot = htab_find_slot_with_hash(cachep->obj_ac, &o_obj, o_obj,
+								INSERT);
+
+		if (*slot)
+			printf("obj %llx already in array_cache!\n", o_obj);
+
+		obj_ac = malloc(sizeof(struct kmem_obj_ac));
+		obj_ac->obj = o_obj;
+		obj_ac->ac = ac;
+
+		*slot = obj_ac;
+
+		check_kmem_obj(cachep, o_obj);
+	}
+
+done:
+	free(b_entries);
+}
+
+/* Array of array_caches, such as kmem_cache.array or *kmem_list3.alien */
+static void init_kmem_array_caches(struct kmem_cache *cachep, char * b_caches,
+					int id1, int nr_ids, enum ac_type type)
+{
+	char b_array_cache[GET_TYPE_SIZE(array_cache)];
+	offset o_array_cache;
+	int id;
+
+	for (id = 0; id < nr_ids; id++, b_caches += GET_TYPE_SIZE(_voidp)) {
+		/*
+		 * A node cannot have alien cache on the same node, but some
+		 * kernels (-xen) apparently don't have the corresponding
+		 * array_cache pointer NULL, so skip it now.
+		 */
+		if (type == ac_alien && id1 == id)
+			continue;
+		o_array_cache = kt_ptr_value(b_caches);
+		if (!o_array_cache)
+			continue;
+		if (KDUMP_TYPE_GET(array_cache, o_array_cache, b_array_cache)) {
+			warning(_("could not read array_cache %llx of cache %s type %s id1=%d id2=%d\n"),
+					o_array_cache, cachep->name,
+					ac_type_names[type], id1,
+					type == ac_shared ? -1 : id);
+			continue;
+		}
+		init_kmem_array_cache(cachep, o_array_cache, b_array_cache,
+			type, id1, type == ac_shared ? -1 : id);
+	}
+}
+
+static void init_kmem_list3_arrays(struct kmem_cache *cachep, offset o_list3,
+								int nid)
+{
+	char b_list3[GET_TYPE_SIZE(kmem_list3)];
+	char *b_shared_caches;
+	offset o_alien_caches;
+	char b_alien_caches[nr_node_ids * GET_TYPE_SIZE(_voidp)];
+
+	if (KDUMP_TYPE_GET(kmem_list3, o_list3, b_list3)) {
+		warning(_("error reading kmem_list3 %llx of nid %d of kmem_cache %llx name %s\n"),
+				o_list3, nid, cachep->o_cache, cachep->name);
+		return;
+	}
+
+	/* This is a single pointer, but treat it as array to reuse code */
+	b_shared_caches = b_list3 + MEMBER_OFFSET(kmem_list3, shared);
+	init_kmem_array_caches(cachep, b_shared_caches, nid, 1, ac_shared);
+
+	o_alien_caches = kt_ptr_value(b_list3 + 
+					MEMBER_OFFSET(kmem_list3, alien));
+
+	//TODO: check that this only happens for single-node systems?
+	if (!o_alien_caches)
+		return;
+
+	if (target_read_raw_memory(o_alien_caches, (void *)b_alien_caches,
+					nr_node_ids * GET_TYPE_SIZE(_voidp))) {
+		warning(_("could not read alien array %llx of kmem_list3 %llx of nid %d of cache %s\n"),
+				o_alien_caches, o_list3, nid, cachep->name);
+	}
+
+
+	init_kmem_array_caches(cachep, b_alien_caches, nid, nr_node_ids,
+								ac_alien);
+}
+
+static void check_kmem_list3_slabs(struct kmem_cache *cachep,
+						offset o_list3,	int nid)
+{
+	char b_list3[GET_TYPE_SIZE(kmem_list3)];
+	offset o_lhb;
+	unsigned long counted_free = 0;
+	unsigned long free_objects;
+
+	if (KDUMP_TYPE_GET(kmem_list3, o_list3, b_list3)) {
+		warning(_("error reading kmem_list3 %llx of nid %d of kmem_cache %llx name %s\n"),
+				o_list3, nid, cachep->o_cache, cachep->name);
+		return;
+	}
+
+	free_objects = kt_long_value(b_list3 + MEMBER_OFFSET(kmem_list3,
+							free_objects));
+
+	o_lhb = o_list3 + MEMBER_OFFSET(kmem_list3, slabs_partial);
+	counted_free += check_kmem_slabs(cachep, o_lhb, slab_partial);
+
+	o_lhb = o_list3 + MEMBER_OFFSET(kmem_list3, slabs_full);
+	counted_free += check_kmem_slabs(cachep, o_lhb, slab_full);
+
+	o_lhb = o_list3 + MEMBER_OFFSET(kmem_list3, slabs_free);
+	counted_free += check_kmem_slabs(cachep, o_lhb, slab_free);
+
+//	printf("free=%lu counted=%lu\n", free_objects, counted_free);
+	if (free_objects != counted_free)
+		warning(_("cache %s should have %lu free objects but we counted %lu\n"),
+				cachep->name, free_objects, counted_free);
+}
+
+static struct kmem_cache *init_kmem_cache(offset o_cache)
+{
+	struct kmem_cache *cache;
+	char b_cache[GET_TYPE_SIZE(kmem_cache)];
+	offset o_cache_name;
+	void **slot;
+
+	if (!kmem_cache_cache)
+		init_kmem_caches();
+
+	slot = htab_find_slot_with_hash(kmem_cache_cache, &o_cache, o_cache,
+								INSERT);
+	if (*slot) {
+		cache = (struct kmem_cache*) *slot;
+//		printf("kmem_cache %s found in hashtab!\n", cache->name);
+		return cache;
+	}
+
+//	printf("kmem_cache %llx not found in hashtab, inserting\n", o_cache);
+
+	cache = malloc(sizeof(struct kmem_cache));
+	cache->o_cache = o_cache;
+
+	if (KDUMP_TYPE_GET(kmem_cache, o_cache, b_cache)) {
+		warning(_("error reading contents of kmem_cache at %llx\n"),
+								o_cache);
+		cache->broken = 1;
+		cache->name = "(broken)";
+		goto done;
+	}
+
+	cache->num = kt_int_value(b_cache + MEMBER_OFFSET(kmem_cache, num));
+	cache->buffer_size = kt_int_value(b_cache + MEMBER_OFFSET(kmem_cache,
+								buffer_size));
+	cache->array_caches_inited = 0;
+
+	o_cache_name = kt_ptr_value(b_cache + MEMBER_OFFSET(kmem_cache, name));
+	if (!o_cache_name) {
+		warning(_("kmem_cache %llx has a NULL name pointer\n"), o_cache);
+		cache->name = "(null)";
+	} else {
+		cache->name = kt_strndup(o_cache_name, 128);
+	}
+	cache->broken = 0;
+//	printf("cache name is: %s\n", cache->name);
+
+done:
+	*slot = cache;
+	return cache;
+}
+
+static void init_kmem_cache_arrays(struct kmem_cache *cache)
+{
+	char b_cache[GET_TYPE_SIZE(kmem_cache)];
+	char *b_nodelists, *b_array_caches;
+	offset o_nodelist;
+	int node;
+
+	if (cache->array_caches_inited || cache->broken)
+		return;
+
+	if (KDUMP_TYPE_GET(kmem_cache, cache->o_cache, b_cache)) {
+		warning(_("error reading contents of kmem_cache at %llx\n"),
+							cache->o_cache);
+		return;
+	}
+
+
+	cache->obj_ac = htab_create_alloc(64, kmem_ac_hash, kmem_ac_eq,
+						NULL, xcalloc, xfree);
+
+	b_nodelists = b_cache + MEMBER_OFFSET(kmem_cache, nodelists);
+	for (node = 0; node < nr_node_ids;
+			node++, b_nodelists += GET_TYPE_SIZE(_voidp)) {
+		o_nodelist = kt_ptr_value(b_nodelists);
+		if (!o_nodelist)
+			continue;
+//		printf("found nodelist[%d] %llx\n", node, o_nodelist);
+		init_kmem_list3_arrays(cache, o_nodelist, node);
+	}
+
+	b_array_caches = b_cache + MEMBER_OFFSET(kmem_cache, array);
+	init_kmem_array_caches(cache, b_array_caches, -1, nr_cpu_ids,
+								ac_percpu);
+
+	cache->array_caches_inited = 1;
+}
+
+static void check_kmem_cache(struct kmem_cache *cache)
+{
+	char b_cache[GET_TYPE_SIZE(kmem_cache)];
+	char *b_nodelists;
+	offset o_nodelist;
+	int node;
+
+	init_kmem_cache_arrays(cache);
+
+	if (KDUMP_TYPE_GET(kmem_cache, cache->o_cache, b_cache)) {
+		warning(_("error reading contents of kmem_cache at %llx\n"),
+							cache->o_cache);
+		return;
+	}
+
+	b_nodelists = b_cache + MEMBER_OFFSET(kmem_cache, nodelists);
+	for (node = 0; node < nr_node_ids;
+			node++, b_nodelists += GET_TYPE_SIZE(_voidp)) {
+		o_nodelist = kt_ptr_value(b_nodelists);
+		if (!o_nodelist)
+			continue;
+//		printf("found nodelist[%d] %llx\n", node, o_nodelist);
+		check_kmem_list3_slabs(cache, o_nodelist, node);
+	}
+}
+
+static int init_kmem_caches(void)
+{
+	offset o_kmem_cache;
+	struct list_iter iter;
+	offset o_nr_node_ids, o_nr_cpu_ids;
+
+	kmem_cache_cache = htab_create_alloc(64, kmem_cache_hash,
+					kmem_cache_eq, NULL, xcalloc, xfree);
+
+	o_slab_caches = get_symbol_value("slab_caches");
+	if (! o_slab_caches) {
+		o_slab_caches = get_symbol_value("cache_chain");
+		if (!o_slab_caches) {
+			warning(_("Cannot find slab_caches\n"));
+			return -1;
+		}
+	}
+	printf("slab_caches: %llx\n", o_slab_caches);
+
+	o_nr_cpu_ids = get_symbol_value("nr_cpu_ids");
+	if (! o_nr_cpu_ids) {
+		warning(_("nr_cpu_ids not found, assuming 1 for !SMP"));
+	} else {
+		printf("o_nr_cpu_ids = %llx\n", o_nr_cpu_ids);
+		nr_cpu_ids = kt_int_value_off(o_nr_cpu_ids);
+		printf("nr_cpu_ids = %d\n", nr_cpu_ids);
+	}
+
+	o_nr_node_ids = get_symbol_value("nr_node_ids");
+	if (! o_nr_node_ids) {
+		warning(_("nr_node_ids not found, assuming 1 for !NUMA"));
+	} else {
+		printf("o_nr_node_ids = %llx\n", o_nr_node_ids);
+		nr_node_ids = kt_int_value_off(o_nr_node_ids);
+		printf("nr_node_ids = %d\n", nr_node_ids);
+	}
+
+	list_for_each(iter, o_slab_caches) {
+		o_kmem_cache = iter.curr - MEMBER_OFFSET(kmem_cache,list);
+//		printf("found kmem cache: %llx\n", o_kmem_cache);
+
+		init_kmem_cache(o_kmem_cache);
+	}
+
+	return 0;
+}
+
+static void check_kmem_caches(void)
+{
+	offset o_lhb, o_kmem_cache;
+	struct list_iter iter;
+	struct kmem_cache *cache;
+
+	if (!kmem_cache_cache)
+		init_kmem_caches();
+
+	list_for_each(iter, o_slab_caches) {
+		o_kmem_cache = iter.curr - MEMBER_OFFSET(kmem_cache,list);
+
+		cache = init_kmem_cache(o_kmem_cache);
+		printf("checking kmem cache %llx name %s\n", o_kmem_cache,
+				cache->name);
+		if (cache->broken) {
+			printf("cache is too broken, skipping");
+			continue;
+		}
+		check_kmem_cache(cache);
+	}
+}
+
+
+
+
+static struct page read_page(offset o_page)
+{
+	char b_page[GET_TYPE_SIZE(page)];
+	struct page page;
+
+	if (KDUMP_TYPE_GET(page, o_page, b_page)) {
+		page.valid = 0;
+		return page;
+	}
+
+	page.flags = kt_long_value(b_page + MEMBER_OFFSET(page, flags));
+	page.lru.next = kt_ptr_value(b_page + MEMBER_OFFSET(page, lru)
+					+ MEMBER_OFFSET(list_head, next));
+	page.lru.prev = kt_ptr_value(b_page + MEMBER_OFFSET(page, lru)
+					+ MEMBER_OFFSET(list_head, prev));
+	page.first_page = kt_ptr_value(b_page +
+					MEMBER_OFFSET(page, first_page));
+	page.valid = 1;
+
+	return page;
+}
+
+static inline struct page compound_head(struct page page)
+{
+	if (page.valid && PageTail(page))
+		return read_page(page.first_page);
+	return page;
+}
+
+static struct page virt_to_head_page(offset addr)
+{
+	struct page page;
+
+	page = read_page(virt_to_opage(addr));
+
+	return compound_head(page);
+}
+
+static int check_slab_obj(offset obj)
+{
+	struct page page;
+	offset o_cache, o_slab;
+	struct kmem_cache *cachep;
+	struct kmem_slab *slabp;
+	struct kmem_obj_ac *obj_ac;
+	struct kmem_ac *ac;
+	unsigned int idx;
+	offset obj_base;
+	unsigned int i, cnt = 0;
+	int free = 0;
+
+	page = virt_to_head_page(obj);
+
+	if (!page.valid) {
+		warning(_("unable to read struct page for object at %llx\n"),
+				obj);
+		return 0;
+	}
+
+	if (!PageSlab(page))
+		return 0;
+
+	o_cache = page.lru.next;
+	o_slab = page.lru.prev;
+	printf("pointer %llx is on slab %llx of cache %llx\n", obj, o_slab,
+								o_cache);
+
+	cachep = init_kmem_cache(o_cache);
+	init_kmem_cache_arrays(cachep);
+	slabp = init_kmem_slab(cachep, o_slab);
+	if (!slabp)
+		return 1;
+
+	//TODO: kernel implementation uses reciprocal_divide, check?
+	idx = (obj - slabp->s_mem) / cachep->buffer_size;
+	obj_base = slabp->s_mem + idx * cachep->buffer_size;
+
+	printf("pointer is to object %llx with index %u\n", obj_base, idx);
+
+	i = slabp->free;
+	while (i != BUFCTL_END) {
+		cnt++;
+
+		if (cnt > cachep->num) {
+			printf("free bufctl cycle detected in slab %llx\n", o_slab);
+			break;
+		}
+		if (i > cachep->num) {
+			printf("bufctl value overflow (%d) in slab %llx\n", i, o_slab);
+			break;
+		}
+
+		if (i == idx)
+			free = 1;
+
+		i = slabp->bufctl[i];
+	}
+
+	printf("object is %s\n", free ? "free" : "allocated");
+
+	obj_ac = htab_find_with_hash(cachep->obj_ac, &obj, obj);
+
+	if (obj_ac) {
+		ac = obj_ac->ac;
+		printf("object is in array_cache %llx type %s[%d,%d]\n",
+			ac->offset, ac_type_names[ac->type], ac->at_node,
+			ac->for_node_cpu);
+	}
+
+	free_kmem_slab(slabp);
+
+	return 1;
+}
+
+static int init_memmap(void)
+{
+	const char *cfg;
+	offset o_mem_map;
+	offset o_page;
+	struct page page;
+	unsigned long long p_memmap;
+
+	//FIXME: why are all NULL?
+
+	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_FLATMEM");
+	printf("CONFIG_FLATMEM=%s\n", cfg ? cfg : "(null)");
+
+	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_DISCONTIGMEM");
+	printf("CONFIG_DISCONTIGMEM=%s\n", cfg ? cfg : "(null)");
+
+	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_SPARSEMEM_VMEMMAP");
+	printf("CONFIG_SPARSEMEM_VMEMMAP=%s\n", cfg ? cfg : "(null)");
+
+	o_mem_map = get_symbol_value("mem_map");
+	printf("memmap: %llx\n", o_mem_map);
+
+	if (o_mem_map) {
+		p_memmap = kt_ptr_value_off(o_mem_map);
+		printf("memmap is pointer to: %llx\n", p_memmap);
+		if (p_memmap != -1)
+			memmap = p_memmap;
+	}
+
+/*
+	o_page = virt_to_opage(0xffff880138bedf40UL);
+	printf("ffff880138bedf40 is page %llx\n", o_page);
+
+	page = read_page(o_page);
+	printf("flags=%lx lru=(%llx,%llx) first_page=%llx\n",page.flags,
+			page.lru.next, page.lru.prev, page.first_page);
+	printf("PG_slab=%llx\n", get_symbol_value("PG_slab"));
+	printf("PageSlab(page)==%d\n", PageSlab(page));
+*/
+	return 0;
+}
+
 static int init_values(void);
 static int init_values(void)
 {
 	struct symbol *s;
 	char *b = NULL, *init_task = NULL, *task = NULL;
-	offset off, off_task, rsp, rip, _rsp;
+	offset off, o_task, rsp, rip, _rsp;
 	offset tasks;
+	offset o_tasks;
+	offset off_task;
 	offset stack;
 	offset o_init_task;
 	int state;
@@ -1108,6 +2266,7 @@ static int init_values(void)
 	int cnt = 0;
 	int pid_reserve;
 	struct task_info *task_info;
+	struct list_iter iter;
 
 	s = NULL;
 	
@@ -1141,58 +2300,59 @@ static int init_values(void)
 		goto error;
 	task = KDUMP_TYPE_ALLOC(task_struct);
 	if (!task) goto error;
+
 	if (KDUMP_TYPE_GET(task_struct, o_init_task, init_task))
 		goto error;
 	tasks = kt_ptr_value(init_task + MEMBER_OFFSET(task_struct,tasks));
+	o_tasks = o_init_task + MEMBER_OFFSET(task_struct, tasks);
 
 	i = 0;
-	off = 0;
 	pid_reserve = 50000;
 
 	print_thread_events = 0;
 	in = current_inferior();
 	inferior_appeared (in, 1);
 
-	list_head_for_each(tasks, init_task + MEMBER_OFFSET(task_struct,tasks), off) {
-		
+	list_for_each_from(iter, o_tasks) {
+
 		struct thread_info *info;
 		int pid;
 		ptid_t tt;
 		struct regcache *rc;
 		long long val;
 		offset main_tasks, mt;
-		
+		struct list_iter iter_thr;
+		offset o_threads;
 
 		//fprintf(stderr, __FILE__":%d: ok\n", __LINE__);
 		off_task = off - MEMBER_OFFSET(task_struct,tasks);
 		if (KDUMP_TYPE_GET(task_struct, off_task, task)) continue;
 
-		main_tasks = off_task;//kt_ptr_value(task + MEMBER_OFFSET(task_struct,thread_group));
+		o_task = iter.curr - MEMBER_OFFSET(task_struct, tasks);
+		o_threads = o_task + MEMBER_OFFSET(task_struct, thread_group);
+		list_for_each_from(iter_thr, o_threads) {
 
-		do {
-		//list_head_for_each(main_tasks, task + MEMBER_OFFSET(task_struct,thread_group), mt) {
-
-			//off_task = mt - MEMBER_OFFSET(task_struct,thread_group);
-			if (KDUMP_TYPE_GET(task_struct, off_task, task))  {
+			o_task = iter_thr.curr - MEMBER_OFFSET(task_struct,
+								thread_group);
+			if (KDUMP_TYPE_GET(task_struct, o_task, task))
 				continue;
-			}
-
-			if (add_task(off_task, &pid_reserve, task)) {
-
-			} else {
-				
-				printf_unfiltered(_("Loaded processes: %d\r"), ++cnt);
-			}
-			off_task = kt_ptr_value(task + MEMBER_OFFSET(task_struct, thread_group)) - MEMBER_OFFSET(task_struct, thread_group);
-			if (off_task == main_tasks) break;
 
-		} while (1);
+			if (!add_task(o_task, &pid_reserve, task))
+				printf_unfiltered(_("Loaded processes: %d\r"),
+									++cnt);
+		}
 	}
 
 	if (b) free(b);
 	if (init_task) free(init_task);
 
 	printf_unfiltered(_("Loaded processes: %d\n"), cnt);
+	init_memmap();
+
+//	check_kmem_caches();
+//	check_slab_obj(0xffff880138bedf40UL);
+//	check_slab_obj(0xffff8801359734c0UL);
+
 	return 0;
 error:
 	if (b) free(b);
@@ -1373,7 +2533,6 @@ core_detach (struct target_ops *ops, const char *args, int from_tty)
 		printf_filtered (_("No core file now.\n"));
 }
 
-static kdump_paddr_t transform_memory(kdump_paddr_t addr);
 static kdump_paddr_t transform_memory(kdump_paddr_t addr)
 {
 	kdump_paddr_t out;
@@ -1396,10 +2555,12 @@ kdump_xfer_partial (struct target_ops *ops, enum target_object object,
 	{
 		case TARGET_OBJECT_MEMORY:
 			offset = transform_memory((kdump_paddr_t)offset);
-			r = kdump_read(dump_ctx, (kdump_paddr_t)offset, (unsigned char*)readbuf, (size_t)len, KDUMP_PHYSADDR);
+			r = kdump_read(dump_ctx, KDUMP_KPHYSADDR, (kdump_paddr_t)offset, (unsigned char*)readbuf, (size_t)len);
 			if (r != len) {
-				error(_("Cannot read %lu bytes from %lx (%lld)!"), (size_t)len, (long unsigned int)offset, (long long)r);
-			} else
+				warning(_("Cannot read %lu bytes from %lx (%lld)!"),
+						(size_t)len, (long unsigned int)offset, (long long)r);
+				return TARGET_XFER_E_IO;
+			} else 
 				*xfered_len = len;
 
 			return TARGET_XFER_OK;
@@ -1797,7 +2958,9 @@ static void kdumpps_command(char *fn, int from_tty)
 		if (!task) continue;
 		if (task->cpu == -1) cpu[0] = '\0';
 		else snprintf(cpu, 5, "% 4d", task->cpu);
+#ifdef _DEBUG
 		printf_filtered(_("% 7d %llx %llx %llx %-4s %s\n"), task->pid, task->task_struct, task->ip, task->sp, cpu, tp->name);
+#endif
 	}
 }
 
-- 
2.7.0

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 4/4] Minor cleanups
  2016-01-31 21:45 Enable gdb to open Linux kernel dumps Ales Novak
@ 2016-01-31 21:45 ` Ales Novak
  2016-01-31 21:45 ` [PATCH 2/4] Add Jeff Mahoney's py-crash patches Ales Novak
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 31+ messages in thread
From: Ales Novak @ 2016-01-31 21:45 UTC (permalink / raw)
  To: gdb-patches; +Cc: Ales Novak

---
 gdb/kdump.c | 393 +++++++++++++++++++++++++++++++++---------------------------
 1 file changed, 215 insertions(+), 178 deletions(-)

diff --git a/gdb/kdump.c b/gdb/kdump.c
index e231559..5c1c5a7 100644
--- a/gdb/kdump.c
+++ b/gdb/kdump.c
@@ -159,7 +159,7 @@ typedef enum {
 	ARCH_NONE,
 	ARCH_X86_64,
 	ARCH_S390X,
-	ARCH_PPC64LE,
+	ARCH_PPC64,
 } t_arch;
 
 struct cpuinfo {
@@ -419,6 +419,23 @@ enum {
 	T_REF
 };
 
+//static int ppc64_retregs_active(struct task_info *task_info, struct regcache *rc, kdump_reg_t rsp, int cpu);
+
+struct t_kdump_arch {
+	char *kdident;
+	char *gdbident;
+	int flags;
+	t_arch arch;
+	int (*init_func)(const struct t_kdump_arch *, int *);
+	int (*retregs_active_func)(struct task_info *task_info,
+			struct regcache *rc, kdump_reg_t rsp, int cpu);
+	int (*retregs_scheduled_func)(struct task_info *task_info, 
+			struct regcache *rc, kdump_reg_t rsp);
+} ;
+
+static const struct t_kdump_arch *kdump_arch = NULL;
+
+
 static void free_task_info(struct private_thread_info *addr)
 {
 	struct task_info *ti = (struct task_info*)addr;
@@ -1191,6 +1208,169 @@ static int get_process_cpu(offset task)
 	return -1;
 }
 
+static int x86_retregs_active(struct task_info *task_info, struct regcache *rc, kdump_reg_t rsp, int cpu)
+{
+	kdump_reg_t rip, val, _b, *b = &_b;
+	long long regs[64];
+	kdump_reg_t reg;
+
+#ifdef _DEBUG
+	printf("task %p is running on %d\n", (void*)task_info->task_struct, cpu);
+#endif
+
+#define REG(en,mem) kdump_read_reg(dump_ctx, cpu, GET_REGISTER_OFFSET(mem), &reg); regcache_raw_supply(rc, en, &reg)
+
+	REG(reg_RSP,sp);
+	task_info->sp = reg;
+	REG(reg_RIP,ip);
+	task_info->ip = reg;
+	REG(reg_RAX,ax);
+	REG(reg_RCX,cx);
+	REG(reg_RDX,dx);
+	REG(reg_RBX,bx);
+	REG(reg_RBP,bp);
+	REG(reg_RSI,si);
+	REG(reg_RDI,di);
+	REG(reg_R8,r8);
+	REG(reg_R9,r9);
+	REG(reg_R10,r10);
+	REG(reg_R11,r11);
+	REG(reg_R12,r12);
+	REG(reg_R13,r13);
+	REG(reg_R14,r14);
+	REG(reg_R15,r15);
+	REG(reg_RFLAGS,flags);
+	REG(reg_ES,es);
+	REG(reg_CS,cs);
+	REG(reg_SS,ss);
+	REG(reg_DS,ds);
+	REG(reg_FS,fs);
+	REG(reg_GS,gs);
+#undef REG
+	return 0;
+}
+static int x86_retregs_scheduled(struct task_info *task_info, struct regcache *rc, kdump_reg_t rsp)
+{
+	kdump_reg_t rip, val, _b, *b = &_b;
+	long long regs[64];
+
+	if (KDUMP_TYPE_GET(_voidp, rsp, b)) return -2;
+	rip = kt_ptr_value(b);
+
+	/*
+	 * So we're gonna skip its stackframe
+	 * FIXME: use the size obtained from debuginfo
+	 */
+	rsp += 0x148;
+	if (target_read_raw_memory(rsp - 0x8 * (1 + 6), (void*)regs, 0x8 * 6))
+		warning(_("Could not read regs\n"));
+
+	regcache_raw_supply(rc, 15, &regs[5]);
+	regcache_raw_supply(rc, 14, &regs[4]);
+	regcache_raw_supply(rc, 13, &regs[3]);
+	regcache_raw_supply(rc, 12, &regs[2]);
+	regcache_raw_supply(rc, 6, &regs[1]);
+	regcache_raw_supply(rc, 3, &regs[0]);
+
+	KDUMP_TYPE_GET(_voidp, rsp, b);
+	rip = kt_ptr_value(b);
+	rsp += 8;
+
+	regcache_raw_supply(rc, 7, &rsp);
+	regcache_raw_supply(rc, 16, &rip);
+
+	task_info->sp = rsp;
+	task_info->ip = rip;
+
+	return 0;
+}
+static int ppc64_retregs_active(struct task_info *task_info, struct regcache *rc, kdump_reg_t rsp, int cpu)
+{
+	kdump_reg_t val;
+	kdump_reg_t reg;
+	int i;
+	long long regs[64];
+	for (i = 0; i < 32; i ++) {
+		kdump_read_reg(dump_ctx, cpu, i, &reg);
+		val = htobe64(reg);
+		regcache_raw_supply(rc, i, &val);
+	//	kdump_read_reg(dump_ctx, cpu, 32, &reg); regcache_raw_supply(rc, 32, &val);
+	//	kdump_read_reg(dump_ctx, cpu, 1, &reg); regcache_raw_supply(rc, 1, &val);
+	}
+	for (i = 32; i < 49; i ++) {
+		kdump_read_reg(dump_ctx, cpu, i, &reg);
+		val = htobe64(reg);
+		regcache_raw_supply(rc, i+32, &val);
+	}
+	kdump_read_reg(dump_ctx, cpu, 32, &reg);
+	task_info->ip = reg;
+	kdump_read_reg(dump_ctx, cpu, 1, &reg);
+	task_info->sp = reg;
+	for (i = 0; i < 129; i ++) {
+		val = i;
+	//	regcache_raw_supply(rc, i, &val);
+	}
+
+	return 0;
+}
+
+static int ppc64_retregs_scheduled(struct task_info *task_info, struct regcache *rc, kdump_reg_t rsp)
+{
+	return 0;
+}
+
+static int s390x_retregs_active(struct task_info *task_info, struct regcache *rc, kdump_reg_t rsp, int cpu)
+{
+	kdump_reg_t rip = 0, val, _b, *b = &_b;
+
+	if (! KDUMP_TYPE_GET(_voidp, rsp+136, b))
+		rip = kt_ptr_value(b);
+	if (KDUMP_TYPE_GET(_voidp, rsp+144, b)) return -3;
+	rsp = kt_ptr_value(b);
+	task_info->sp = rsp;
+	task_info->ip = rip;
+
+	val = be64toh(rip);
+	regcache_raw_supply(rc, 1, &val);
+
+	return 0;
+}
+
+static int s390x_retregs_scheduled(struct task_info *task_info, struct regcache *rc, kdump_reg_t rsp)
+{
+	kdump_reg_t rip = 0, val, _b, *b = &_b;
+
+	if (! KDUMP_TYPE_GET(_voidp, rsp+136, b))
+		rip = kt_ptr_value(b);
+	if (KDUMP_TYPE_GET(_voidp, rsp+144, b)) return -3;
+	rsp = kt_ptr_value(b);
+	task_info->sp = rsp;
+	task_info->ip = rip;
+
+	val = be64toh(rip);
+	regcache_raw_supply(rc, 1, &val);
+
+	if (! KDUMP_TYPE_GET(_voidp, rsp+136, b)) regcache_raw_supply(rc, S390_R14_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+128, b)) regcache_raw_supply(rc, S390_R13_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+120, b)) regcache_raw_supply(rc, S390_R12_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+112, b)) regcache_raw_supply(rc, S390_R11_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+104, b)) regcache_raw_supply(rc, S390_R10_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+96, b)) regcache_raw_supply(rc, S390_R9_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+88, b)) regcache_raw_supply(rc, S390_R8_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+80, b)) regcache_raw_supply(rc, S390_R7_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+72, b)) regcache_raw_supply(rc, S390_R6_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+64, b)) regcache_raw_supply(rc, S390_R5_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+56, b)) regcache_raw_supply(rc, S390_R4_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+48, b)) regcache_raw_supply(rc, S390_R3_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+40, b)) regcache_raw_supply(rc, S390_R2_REGNUM, b);
+	if (! KDUMP_TYPE_GET(_voidp, rsp+32, b)) regcache_raw_supply(rc, S390_R1_REGNUM, b);
+	
+	val = be64toh(rsp);
+	regcache_raw_supply(rc, S390_R15_REGNUM, &val);
+
+	return 0;
+}
+
 static int add_task(offset off_task, int *pid_reserve, char *task);
 static int add_task(offset off_task, int *pid_reserve, char *task)
 {
@@ -1202,19 +1382,16 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
 	offset stack;
 	offset o_init_task;
 	int state;
-	int i, cpu;
+	int cpu;
 	int hashsize;
 	struct task_info *task_info;
-
 	struct thread_info *info;
 	int pid;
 	ptid_t tt;
 	struct regcache *rc;
-	long long val;
 
 	b = _b;
 
-
 	state = kt_int_value(task + MEMBER_OFFSET(task_struct,state));
 	pid = kt_int_value(task + MEMBER_OFFSET(task_struct,pid));
 	stack = kt_ptr_value(task + MEMBER_OFFSET(task_struct,stack));
@@ -1228,17 +1405,6 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
 	task_info->pid = pid;
 	task_info->cpu = -1;
 
-	if (types.arch == ARCH_S390X) {
-		if (! KDUMP_TYPE_GET(_voidp, rsp+136, b))
-			rip = kt_ptr_value(b);
-		if (KDUMP_TYPE_GET(_voidp, rsp+144, b)) return -3;
-		rsp = kt_ptr_value(b);
-		task_info->sp = rsp;
-		task_info->ip = rip;
-	} else {
-		if (KDUMP_TYPE_GET(_voidp, rsp, b)) return -2;
-		rip = kt_ptr_value(b);
-	}
 #ifdef _DEBUG
 	fprintf(stdout, "TASK %llx,%llx,rsp=%llx,rip=%llx,pid=%d,state=%d,name=%s\n", off_task, stack, rsp, rip, pid, state, task + MEMBER_OFFSET(task_struct,comm));
 #endif
@@ -1257,146 +1423,16 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
 	inferior_ptid = tt;
 	info->name = strdup(task + MEMBER_OFFSET(task_struct,comm));
 
-	val = 0;
-
 	rc = get_thread_regcache (tt);
 
-	if (types.arch == ARCH_S390X) {
-
-		if (((cpu = get_process_cpu(off_task)) != -1)) {
-#ifdef _DEBUG
-			printf("task %p is running on %d\n", (void*)task_info->task_struct, cpu);
-#endif
-		}
-		/*
-		 * TODO: implement retrieval of register values from lowcore
-		 */
-		val = be64toh(rip);
-		regcache_raw_supply(rc, 1, &val);
-
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+136, b)) regcache_raw_supply(rc, S390_R14_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+128, b)) regcache_raw_supply(rc, S390_R13_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+120, b)) regcache_raw_supply(rc, S390_R12_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+112, b)) regcache_raw_supply(rc, S390_R11_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+104, b)) regcache_raw_supply(rc, S390_R10_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+96, b)) regcache_raw_supply(rc, S390_R9_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+88, b)) regcache_raw_supply(rc, S390_R8_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+80, b)) regcache_raw_supply(rc, S390_R7_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+72, b)) regcache_raw_supply(rc, S390_R6_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+64, b)) regcache_raw_supply(rc, S390_R5_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+56, b)) regcache_raw_supply(rc, S390_R4_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+48, b)) regcache_raw_supply(rc, S390_R3_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+40, b)) regcache_raw_supply(rc, S390_R2_REGNUM, b);
-		if (! KDUMP_TYPE_GET(_voidp, _rsp+32, b)) regcache_raw_supply(rc, S390_R1_REGNUM, b);
-		
-		val = be64toh(rsp);
-		regcache_raw_supply(rc, S390_R15_REGNUM, &val);
-	} else if (types.arch == ARCH_X86_64) {
-		/*
-		 * The task is not running - e.g. crash would show it's stuck in schedule()
-		 * Yet schedule() is not on its stack.
-		 *
-		 */
-		cpu = 0;
-		if (((cpu = get_process_cpu(off_task)) == -1)) {
-			long long regs[64];
-
-			/*
-			 * So we're gonna skip its stackframe
-			 * FIXME: use the size obtained from debuginfo
-			 */
-			rsp += 0x148;
-			if (target_read_raw_memory(rsp - 0x8 * (1 + 6), (void*)regs, 0x8 * 6))
-				warning(_("Could not read regs\n"));
-
-			regcache_raw_supply(rc, 15, &regs[5]);
-			regcache_raw_supply(rc, 14, &regs[4]);
-			regcache_raw_supply(rc, 13, &regs[3]);
-			regcache_raw_supply(rc, 12, &regs[2]);
-			regcache_raw_supply(rc, 6, &regs[1]);
-			regcache_raw_supply(rc, 3, &regs[0]);
-
-			KDUMP_TYPE_GET(_voidp, rsp, b);
-			rip = kt_ptr_value(b);
-			rsp += 8;
-
-			regcache_raw_supply(rc, 7, &rsp);
-			regcache_raw_supply(rc, 16, &rip);
-
-			task_info->sp = rsp;
-			task_info->ip = rip;
-		} else {
-			kdump_reg_t reg;
-
-			task_info->cpu = cpu;
-#ifdef _DEBUG
-			printf("task %p is running on %d\n", (void*)task_info->task_struct, cpu);
-#endif
-
-#define REG(en,mem) kdump_read_reg(dump_ctx, cpu, GET_REGISTER_OFFSET(mem), &reg); regcache_raw_supply(rc, en, &reg)
-		
-			REG(reg_RSP,sp);
-			task_info->sp = reg;
-			REG(reg_RIP,ip);
-			task_info->ip = reg;
-			REG(reg_RAX,ax);
-			REG(reg_RCX,cx);
-			REG(reg_RDX,dx);
-			REG(reg_RBX,bx);
-			REG(reg_RBP,bp);
-			REG(reg_RSI,si);
-			REG(reg_RDI,di);
-			REG(reg_R8,r8);
-			REG(reg_R9,r9);
-			REG(reg_R10,r10);
-			REG(reg_R11,r11);
-			REG(reg_R12,r12);
-			REG(reg_R13,r13);
-			REG(reg_R14,r14);
-			REG(reg_R15,r15);
-			REG(reg_RFLAGS,flags);
-			REG(reg_ES,es);
-			REG(reg_CS,cs);
-			REG(reg_SS,ss);
-			REG(reg_DS,ds);
-			REG(reg_FS,fs);
-			REG(reg_GS,gs);
-#undef REG
+	if (((cpu = get_process_cpu(off_task)) != -1)) {
+		task_info->cpu = cpu;
+		if (kdump_arch->retregs_active_func(task_info, rc, rsp, cpu)) {
+			warning("Cannot retrieve registers of active task %d\n", pid);
 		}
-	} else if (types.arch == ARCH_PPC64LE) {
-		if (((cpu = get_process_cpu(off_task)) == -1)) {
-			val = 789;
-			regcache_raw_supply(rc, 1, &val);
-			val = 456;
-			regcache_raw_supply(rc, 64, &val);
-			for (i = 0; i < 169; i ++) {
-				val = htobe64(i);
-				regcache_raw_supply(rc, i, &val);
-			}
-		} else {
-			kdump_reg_t reg;
-			task_info->cpu = cpu;
-			long long regs[64];
-			for (i = 0; i < 32; i ++) {
-				kdump_read_reg(dump_ctx, cpu, i, &reg);
-				val = htobe64(reg);
-				regcache_raw_supply(rc, i, &val);
-			//	kdump_read_reg(dump_ctx, cpu, 32, &reg); regcache_raw_supply(rc, 32, &val);
-			//	kdump_read_reg(dump_ctx, cpu, 1, &reg); regcache_raw_supply(rc, 1, &val);
-			}
-			for (i = 32; i < 49; i ++) {
-				kdump_read_reg(dump_ctx, cpu, i, &reg);
-				val = htobe64(reg);
-				regcache_raw_supply(rc, i+32, &val);
-			}
-			kdump_read_reg(dump_ctx, cpu, 32, &reg);
-			task_info->ip = reg;
-			kdump_read_reg(dump_ctx, cpu, 1, &reg);
-			task_info->sp = reg;
-			for (i = 0; i < 129; i ++) {
-				val = i;
-			//	regcache_raw_supply(rc, i, &val);
-			}
+	} else {
+		if (kdump_arch->retregs_scheduled_func(task_info, rc, rsp)) {
+			warning("Cannot retrieve registers of scheduled task %d\n", pid);
 		}
 	}
 
@@ -2091,9 +2127,6 @@ static void check_kmem_caches(void)
 	}
 }
 
-
-
-
 static struct page read_page(offset o_page)
 {
 	char b_page[GET_TYPE_SIZE(page)];
@@ -2217,20 +2250,30 @@ static int init_memmap(void)
 	//FIXME: why are all NULL?
 
 	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_FLATMEM");
+#ifdef _DEBUG
 	printf("CONFIG_FLATMEM=%s\n", cfg ? cfg : "(null)");
+#endif
 
 	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_DISCONTIGMEM");
+#ifdef _DEBUG
 	printf("CONFIG_DISCONTIGMEM=%s\n", cfg ? cfg : "(null)");
+#endif
 
 	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_SPARSEMEM_VMEMMAP");
+#ifdef _DEBUG
 	printf("CONFIG_SPARSEMEM_VMEMMAP=%s\n", cfg ? cfg : "(null)");
+#endif
 
 	o_mem_map = get_symbol_value("mem_map");
+#ifdef _DEBUG
 	printf("memmap: %llx\n", o_mem_map);
+#endif
 
 	if (o_mem_map) {
 		p_memmap = kt_ptr_value_off(o_mem_map);
+#ifdef _DEBUG
 		printf("memmap is pointer to: %llx\n", p_memmap);
+#endif
 		if (p_memmap != -1)
 			memmap = p_memmap;
 	}
@@ -2269,7 +2312,7 @@ static int init_values(void)
 	struct list_iter iter;
 
 	s = NULL;
-	
+
 	b = KDUMP_TYPE_ALLOC(_voidp);
 	if (!b) goto error;
 
@@ -2315,16 +2358,13 @@ static int init_values(void)
 
 	list_for_each_from(iter, o_tasks) {
 
-		struct thread_info *info;
 		int pid;
 		ptid_t tt;
 		struct regcache *rc;
 		long long val;
-		offset main_tasks, mt;
 		struct list_iter iter_thr;
 		offset o_threads;
 
-		//fprintf(stderr, __FILE__":%d: ok\n", __LINE__);
 		off_task = off - MEMBER_OFFSET(task_struct,tasks);
 		if (KDUMP_TYPE_GET(task_struct, off_task, task)) continue;
 
@@ -2349,9 +2389,11 @@ static int init_values(void)
 	printf_unfiltered(_("Loaded processes: %d\n"), cnt);
 	init_memmap();
 
-//	check_kmem_caches();
-//	check_slab_obj(0xffff880138bedf40UL);
-//	check_slab_obj(0xffff8801359734c0UL);
+#ifdef _DEBUG
+	check_kmem_caches();
+	check_slab_obj(0xffff880138bedf40UL);
+	check_slab_obj(0xffff8801359734c0UL);
+#endif
 
 	return 0;
 error:
@@ -2361,14 +2403,6 @@ error:
 	return 1;
 }
 
-struct t_kdump_arch {
-	char *kdident;
-	char *gdbident;
-	int flags;
-	t_arch arch;
-	int (*init_func)(const struct t_kdump_arch *, int *);
-} ;
-
 static int kdump_ppc64_init(const struct t_kdump_arch *a, int *flags)
 {
 	*flags = F_BIG_ENDIAN;
@@ -2376,9 +2410,12 @@ static int kdump_ppc64_init(const struct t_kdump_arch *a, int *flags)
 }
 
 static const struct t_kdump_arch archlist[] = {
-	{"x86_64", "i386:x86-64",      F_LITTLE_ENDIAN, ARCH_X86_64,  NULL},
-	{"s390x",  "s390:64-bit",      F_BIG_ENDIAN,    ARCH_S390X,   NULL},
-	{"ppc64",  "powerpc:common64", F_UNKN_ENDIAN,   ARCH_PPC64LE, kdump_ppc64_init},
+	{"x86_64", "i386:x86-64",      F_LITTLE_ENDIAN, ARCH_X86_64,  NULL,
+		x86_retregs_active, x86_retregs_scheduled},
+	{"s390x",  "s390:64-bit",      F_BIG_ENDIAN,    ARCH_S390X,   NULL,
+		s390x_retregs_active, s390x_retregs_scheduled},
+	{"ppc64",  "powerpc:common64", F_UNKN_ENDIAN,   ARCH_PPC64, kdump_ppc64_init,
+		ppc64_retregs_active, ppc64_retregs_scheduled},
 	{NULL}
 };
 
@@ -2416,6 +2453,7 @@ static int kdump_do_init(void)
 	gai.bfd_arch_info = ait;
 	garch = gdbarch_find_by_info(gai);
 	kdump_gdbarch = garch;
+	kdump_arch = a;
 #ifdef _DEBUG
 	fprintf(stderr, "arch=%s,ait=%p,garch=%p\n", selected_architecture_name(), ait, garch);
 #endif
@@ -2423,6 +2461,7 @@ static int kdump_do_init(void)
 	if (a->init_func) {
 		if ((ret = a->init_func(a, &flags)) != 0) {
 			error(_("Architecture %s init_func()=%d"), a->kdident, ret);
+			kdump_arch = NULL;
 			return -5;
 		}
 	}
@@ -2430,7 +2469,7 @@ static int kdump_do_init(void)
 	inf = current_inferior();
 
 	types.arch = a->arch;
-	
+
 	if (init_types(flags)) {
 		warning(_("kdump: Cannot init types!\n"));
 	}
@@ -2935,7 +2974,7 @@ static void kdumpmodules_command (char *filename, int from_tty)
 				  section_addrs, flags);
 		add_target_sections_of_objfile (objf);
 	}
-	
+
 	error:
 
 	if (v) free(v);
@@ -2958,9 +2997,7 @@ static void kdumpps_command(char *fn, int from_tty)
 		if (!task) continue;
 		if (task->cpu == -1) cpu[0] = '\0';
 		else snprintf(cpu, 5, "% 4d", task->cpu);
-#ifdef _DEBUG
 		printf_filtered(_("% 7d %llx %llx %llx %-4s %s\n"), task->pid, task->task_struct, task->ip, task->sp, cpu, tp->name);
-#endif
 	}
 }
 
-- 
2.7.0

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Enable gdb to open Linux kernel dumps
@ 2016-01-31 21:45 Ales Novak
  2016-01-31 21:45 ` [PATCH 4/4] Minor cleanups Ales Novak
                   ` (4 more replies)
  0 siblings, 5 replies; 31+ messages in thread
From: Ales Novak @ 2016-01-31 21:45 UTC (permalink / raw)
  To: gdb-patches

The following patches add the basic ability to access Linux kernel
dumps using the libkdumpfile library. They create a new target
"kdump", so all one has to do is provide the appropriate debuginfo and
then run "target kdump /path/to/vmcore".

The tasks of the dumped kernel are mapped to threads in gdb. 
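
For reference, a minimal session (the paths below are placeholders)
looks roughly like this:

  (gdb) file vmlinux
  (gdb) target kdump /path/to/vmcore
  (gdb) info threads
  (gdb) thread 5
  (gdb) bt

where "info threads" lists the dumped kernel's tasks and the usual
thread/backtrace commands then operate on them.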

In addition, there is code adding an understanding of the Linux SLAB
memory allocator, which means we can tell which SLAB cache a given
address belongs to, list the objects of a SLAB cache given its name -
and more.
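
To illustrate the core of that lookup, here is a condensed,
self-contained sketch of the index arithmetic used by check_kmem_obj()
and check_slab_obj() in patch 3/4 (the addresses and sizes below are
made up for the example):

/* Map a pointer to its object, given the slab's first-object address
   (s_mem) and the cache's object stride (buffer_size).  */
#include <stdio.h>

typedef unsigned long long offset;

int main(void)
{
	offset s_mem = 0xffff880138bed000ULL;	/* example value */
	unsigned int buffer_size = 0x140;	/* example value */
	offset ptr = 0xffff880138bedf40ULL;	/* pointer to check */

	unsigned int idx = (ptr - s_mem) / buffer_size;
	offset obj_base = s_mem + (offset)idx * buffer_size;

	printf("index %u, object base %llx (%s)\n", idx, obj_base,
	       obj_base == ptr ? "points to the object start"
			       : "points into the object");
	return 0;
}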

The patches are against "gdb-7.10-release" (but will apply elsewhere).

Note: the registers of a task are fetched accordingly - either from the
dump metadata (for the active tasks) or from their stacks. It should be
noted that since this mechanism varies among kernel versions and
configurations, my naive implementation currently covers only the dumps
I encounter; handling of other kernel versions is still to be added.
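
A condensed sketch of how patch 4/4 arranges this (simplified
signatures, not the actual struct t_kdump_arch callbacks - just the
dispatch between the retregs_active/retregs_scheduled style hooks):

#include <stdio.h>

struct arch_ops {
	const char *name;
	/* Registers of a task that was on a CPU when the dump was taken
	   come from the dump's per-CPU register notes...  */
	int (*retregs_active)(int cpu);
	/* ...while a scheduled-out task's registers are reconstructed
	   from its kernel stack.  */
	int (*retregs_scheduled)(void);
};

static int x86_active(int cpu)
{
	printf("x86_64: registers from CPU %d notes\n", cpu);
	return 0;
}

static int x86_scheduled(void)
{
	printf("x86_64: registers from the saved stack frame\n");
	return 0;
}

static void fetch_task_regs(const struct arch_ops *ops, int cpu)
{
	if (cpu != -1)		/* the task was running at dump time */
		ops->retregs_active(cpu);
	else			/* the task was scheduled out */
		ops->retregs_scheduled();
}

int main(void)
{
	static const struct arch_ops x86_64 =
		{ "x86_64", x86_active, x86_scheduled };

	fetch_task_regs(&x86_64, 2);	/* active task */
	fetch_task_regs(&x86_64, -1);	/* scheduled-out task */
	return 0;
}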

In the near future, our plan is to remove the clumsy C code handling
this and reimplement it in Python - only the bindings to certain gdb
structures (e.g. thread, regcache) have to be added. A colleague of
mine is already working on that.

The github home of these patches is at:

https://github.com/alesax/gdb-kdump/tree/for-next

libkdumpfile lives at:

https://github.com/ptesarik/libkdumpfile

Fork adding the SLAB support lives at:

https://github.com/tehcaster/gdb-kdump/tree/slab-support


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-01-31 21:45 Enable gdb to open Linux kernel dumps Ales Novak
  2016-01-31 21:45 ` [PATCH 4/4] Minor cleanups Ales Novak
@ 2016-01-31 21:45 ` Ales Novak
  2016-02-01 12:35   ` Kieran Bingham
  2016-02-01 22:23   ` Doug Evans
  2016-01-31 21:45 ` [PATCH 3/4] Add SLAB allocator understanding Ales Novak
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 31+ messages in thread
From: Ales Novak @ 2016-01-31 21:45 UTC (permalink / raw)
  To: gdb-patches; +Cc: Ales Novak

---
 gdb/Makefile.in              |  12 ++
 gdb/python/py-minsymbol.c    | 353 +++++++++++++++++++++++++++++++++++++
 gdb/python/py-objfile.c      |  29 +++-
 gdb/python/py-section.c      | 401 +++++++++++++++++++++++++++++++++++++++++++
 gdb/python/py-symbol.c       |  52 ++++--
 gdb/python/python-internal.h |  14 ++
 gdb/python/python.c          |   7 +-
 7 files changed, 853 insertions(+), 15 deletions(-)
 create mode 100644 gdb/python/py-minsymbol.c
 create mode 100644 gdb/python/py-section.c

diff --git a/gdb/Makefile.in b/gdb/Makefile.in
index 3c7518a..751de4d 100644
--- a/gdb/Makefile.in
+++ b/gdb/Makefile.in
@@ -398,11 +398,13 @@ SUBDIR_PYTHON_OBS = \
 	py-infthread.o \
 	py-lazy-string.o \
 	py-linetable.o \
+	py-minsymbol.o \
 	py-newobjfileevent.o \
 	py-objfile.o \
 	py-param.o \
 	py-prettyprint.o \
 	py-progspace.o \
+	py-section.o \
 	py-signalevent.o \
 	py-stopevent.o \
 	py-symbol.o \
@@ -438,11 +440,13 @@ SUBDIR_PYTHON_SRCS = \
 	python/py-infthread.c \
 	python/py-lazy-string.c \
 	python/py-linetable.c \
+	python/py-minsymbol.c \
 	python/py-newobjfileevent.c \
 	python/py-objfile.c \
 	python/py-param.c \
 	python/py-prettyprint.c \
 	python/py-progspace.c \
+	python/py-section.c \
 	python/py-signalevent.c \
 	python/py-stopevent.c \
 	python/py-symbol.c \
@@ -2607,6 +2611,10 @@ py-linetable.o: $(srcdir)/python/py-linetable.c
 	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-linetable.c
 	$(POSTCOMPILE)
 
+py-minsymbol.o: $(srcdir)/python/py-minsymbol.c
+	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-minsymbol.c
+	$(POSTCOMPILE)
+
 py-newobjfileevent.o: $(srcdir)/python/py-newobjfileevent.c
 	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-newobjfileevent.c
 	$(POSTCOMPILE)
@@ -2627,6 +2635,10 @@ py-progspace.o: $(srcdir)/python/py-progspace.c
 	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-progspace.c
 	$(POSTCOMPILE)
 
+py-section.o: $(srcdir)/python/py-section.c
+	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-section.c
+	$(POSTCOMPILE)
+
 py-signalevent.o: $(srcdir)/python/py-signalevent.c
 	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-signalevent.c
 	$(POSTCOMPILE)
diff --git a/gdb/python/py-minsymbol.c b/gdb/python/py-minsymbol.c
new file mode 100644
index 0000000..efff59da
--- /dev/null
+++ b/gdb/python/py-minsymbol.c
@@ -0,0 +1,353 @@
+/* Python interface to minsymbols.
+
+   Copyright (C) 2008-2013 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+#include "block.h"
+#include "exceptions.h"
+#include "frame.h"
+#include "symtab.h"
+#include "python-internal.h"
+#include "objfiles.h"
+#include "value.h"
+
+extern PyTypeObject minsym_object_type;
+
+typedef struct msympy_symbol_object {
+  PyObject_HEAD
+  /* The GDB minimal_symbol structure this object is wrapping.  */
+  struct minimal_symbol *minsym;
+
+  struct type *type;
+  /* A symbol object is associated with an objfile, so keep track with
+     doubly-linked list, rooted in the objfile.  This lets us
+     invalidate the underlying struct minimal_symbol when the objfile is
+     deleted.  */
+  struct msympy_symbol_object *prev;
+  struct msympy_symbol_object *next;
+} minsym_object;
+
+PyObject *minsym_to_minsym_object (struct minimal_symbol *minsym);
+struct minimal_symbol *minsym_object_to_minsym (PyObject *obj);
+/* Require a valid minimal symbol.  All access to minsym_object->minsym
+   should be gated by this call.  */
+#define MSYMPY_REQUIRE_VALID(minsym_obj, minsym)	\
+  do {							\
+    minsym = minsym_object_to_minsym (minsym_obj);	\
+    if (minsym == NULL)				\
+      {							\
+	PyErr_SetString (PyExc_RuntimeError,		\
+			 _("MiniSymbol is invalid."));	\
+	return NULL;					\
+      }							\
+  } while (0)
+
+static PyObject *
+msympy_str (PyObject *self)
+{
+  PyObject *result;
+  struct minimal_symbol *minsym = NULL;
+
+  MSYMPY_REQUIRE_VALID (self, minsym);
+
+  result = PyString_FromString (MSYMBOL_PRINT_NAME (minsym));
+
+  return result;
+}
+
+static PyObject *
+msympy_get_name (PyObject *self, void *closure)
+{
+  struct minimal_symbol *minsym = NULL;
+
+  MSYMPY_REQUIRE_VALID (self, minsym);
+
+  return PyString_FromString (MSYMBOL_NATURAL_NAME (minsym));
+}
+
+static PyObject *
+msympy_get_file_name (PyObject *self, void *closure)
+{
+  struct minimal_symbol *minsym = NULL;
+
+  MSYMPY_REQUIRE_VALID (self, minsym);
+
+  return PyString_FromString (minsym->filename);
+}
+
+static PyObject *
+msympy_get_linkage_name (PyObject *self, void *closure)
+{
+  struct minimal_symbol *minsym = NULL;
+
+  MSYMPY_REQUIRE_VALID (self, minsym);
+
+  return PyString_FromString (MSYMBOL_LINKAGE_NAME (minsym));
+}
+
+static PyObject *
+msympy_get_print_name (PyObject *self, void *closure)
+{
+  struct minimal_symbol *minsym = NULL;
+
+  MSYMPY_REQUIRE_VALID (self, minsym);
+
+  return msympy_str (self);
+}
+
+static PyObject *
+msympy_is_valid (PyObject *self, PyObject *args)
+{
+  struct minimal_symbol *minsym = NULL;
+
+  minsym = minsym_object_to_minsym (self);
+  if (minsym == NULL)
+    Py_RETURN_FALSE;
+
+  Py_RETURN_TRUE;
+}
+
+/* Implementation of gdb.MiniSymbol.value (self) -> gdb.Value.  Returns
+   the value of the symbol, or an error in various circumstances.  */
+
+static PyObject *
+msympy_value (PyObject *self, PyObject *args)
+{
+  minsym_object *minsym_obj = (minsym_object *)self;
+  struct minimal_symbol *minsym = NULL;
+  struct value *value = NULL;
+  volatile struct gdb_exception except;
+
+  if (!PyArg_ParseTuple (args, ""))
+    return NULL;
+
+  MSYMPY_REQUIRE_VALID (self, minsym);
+  TRY
+    {
+      value = value_from_ulongest(minsym_obj->type,
+				  MSYMBOL_VALUE_RAW_ADDRESS(minsym));
+      if (value)
+	set_value_address(value, MSYMBOL_VALUE_RAW_ADDRESS(minsym));
+    } CATCH (except, RETURN_MASK_ALL) {
+      GDB_PY_HANDLE_EXCEPTION (except);
+    } END_CATCH
+
+
+  return value_to_value_object (value);
+}
+
+/* Given a symbol, and a minsym_object that has previously been
+   allocated and initialized, populate the minsym_object with the
+   struct minimal_symbol data.  Also, register the minsym_object life-cycle
+   with the life-cycle of the object file associated with this
+   symbol, if needed.  */
+static void
+set_symbol (minsym_object *obj, struct minimal_symbol *minsym)
+{
+  obj->minsym = minsym;
+  switch (minsym->type) {
+  case mst_text:
+  case mst_solib_trampoline:
+  case mst_file_text:
+  case mst_text_gnu_ifunc:
+  case mst_slot_got_plt:
+    obj->type = builtin_type(python_gdbarch)->builtin_func_ptr;
+    break;
+
+  case mst_data:
+  case mst_abs:
+  case mst_file_data:
+  case mst_file_bss:
+    obj->type = builtin_type(python_gdbarch)->builtin_data_ptr;
+    break;
+
+  case mst_unknown:
+  default:
+    obj->type = builtin_type(python_gdbarch)->builtin_void;
+    break;
+  }
+
+  obj->prev = NULL;
+  obj->next = NULL;
+}
+
+/* Create a new symbol object (gdb.MiniSymbol) that encapsulates the struct
+   symbol object from GDB.  */
+PyObject *
+minsym_to_minsym_object (struct minimal_symbol *minsym)
+{
+  minsym_object *msym_obj;
+
+  msym_obj = PyObject_New (minsym_object, &minsym_object_type);
+  if (msym_obj)
+    set_symbol (msym_obj, minsym);
+
+  return (PyObject *) msym_obj;
+}
+
+/* Return the symbol that is wrapped by this symbol object.  */
+struct minimal_symbol *
+minsym_object_to_minsym (PyObject *obj)
+{
+  if (! PyObject_TypeCheck (obj, &minsym_object_type))
+    return NULL;
+  return ((minsym_object *) obj)->minsym;
+}
+
+static void
+msympy_dealloc (PyObject *obj)
+{
+  minsym_object *msym_obj = (minsym_object *) obj;
+
+  if (msym_obj->prev)
+    msym_obj->prev->next = msym_obj->next;
+  if (msym_obj->next)
+    msym_obj->next->prev = msym_obj->prev;
+  msym_obj->minsym = NULL;
+}
+
+/* Implementation of
+   gdb.lookup_minimal_symbol (name) -> symbol or None.  */
+
+PyObject *
+gdbpy_lookup_minimal_symbol (PyObject *self, PyObject *args, PyObject *kw)
+{
+  int domain = VAR_DOMAIN;
+  const char *name;
+  static char *keywords[] = { "name", NULL };
+  struct bound_minimal_symbol bound_minsym;
+  struct minimal_symbol *minsym = NULL;
+  PyObject *msym_obj;
+  volatile struct gdb_exception except;
+
+  if (!PyArg_ParseTupleAndKeywords (args, kw, "s|", keywords, &name))
+    return NULL;
+
+  TRY
+    {
+      bound_minsym = lookup_minimal_symbol (name, NULL, NULL);
+    } CATCH (except, RETURN_MASK_ALL) {
+      GDB_PY_HANDLE_EXCEPTION (except);
+    } END_CATCH
+
+  minsym = bound_minsym.minsym;
+  if (minsym)
+    {
+      msym_obj = minsym_to_minsym_object (bound_minsym.minsym);
+      if (!msym_obj)
+	return NULL;
+    }
+  else
+    {
+      msym_obj = Py_None;
+      Py_INCREF (Py_None);
+    }
+
+  return msym_obj;
+}
+
+int
+gdbpy_initialize_minsymbols (void)
+{
+  if (PyType_Ready (&minsym_object_type) < 0)
+    return -1;
+
+  if (PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_UNKNOWN",
+			       mst_unknown) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_TEXT", mst_text) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_TEXT_GNU_IFUNC",
+			      mst_text_gnu_ifunc) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_SLOT_GOT_PLT",
+			      mst_slot_got_plt) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_DATA", mst_data) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_BSS", mst_bss) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_ABS", mst_abs) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_SOLIB_TRAMPOLINE",
+			      mst_solib_trampoline) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_FILE_TEXT",
+			      mst_file_text) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_FILE_DATA",
+			      mst_file_data) < 0
+  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_FILE_BSS",
+			      mst_file_bss) < 0)
+    return -1;
+
+  return gdb_pymodule_addobject (gdb_module, "MiniSymbol",
+				 (PyObject *) &minsym_object_type);
+}
+
+\f
+
+static PyGetSetDef minsym_object_getset[] = {
+  { "name", msympy_get_name, NULL,
+    "Name of the symbol, as it appears in the source code.", NULL },
+  { "linkage_name", msympy_get_linkage_name, NULL,
+    "Name of the symbol, as used by the linker (i.e., may be mangled).",
+    NULL },
+  { "filename", msympy_get_file_name, NULL,
+    "Name of source file the symbol is in. Only applies for mst_file_*.",
+    NULL },
+  { "print_name", msympy_get_print_name, NULL,
+    "Name of the symbol in a form suitable for output.\n\
+This is either name or linkage_name, depending on whether the user asked GDB\n\
+to display demangled or mangled names.", NULL },
+  { NULL }  /* Sentinel */
+};
+
+static PyMethodDef minsym_object_methods[] = {
+  { "is_valid", msympy_is_valid, METH_NOARGS,
+    "is_valid () -> Boolean.\n\
+Return true if this symbol is valid, false if not." },
+  { "value", msympy_value, METH_VARARGS,
+    "value ([frame]) -> gdb.Value\n\
+Return the value of the symbol." },
+  {NULL}  /* Sentinel */
+};
+
+PyTypeObject minsym_object_type = {
+  PyVarObject_HEAD_INIT (NULL, 0)
+  "gdb.MiniSymbol",			  /*tp_name*/
+  sizeof (minsym_object),	  /*tp_basicsize*/
+  0,				  /*tp_itemsize*/
+  msympy_dealloc,		  /*tp_dealloc*/
+  0,				  /*tp_print*/
+  0,				  /*tp_getattr*/
+  0,				  /*tp_setattr*/
+  0,				  /*tp_compare*/
+  0,				  /*tp_repr*/
+  0,				  /*tp_as_number*/
+  0,				  /*tp_as_sequence*/
+  0,				  /*tp_as_mapping*/
+  0,				  /*tp_hash */
+  0,				  /*tp_call*/
+  msympy_str,			  /*tp_str*/
+  0,				  /*tp_getattro*/
+  0,				  /*tp_setattro*/
+  0,				  /*tp_as_buffer*/
+  Py_TPFLAGS_DEFAULT,		  /*tp_flags*/
+  "GDB minimal symbol object",	  /*tp_doc */
+  0,				  /*tp_traverse */
+  0,				  /*tp_clear */
+  0,				  /*tp_richcompare */
+  0,				  /*tp_weaklistoffset */
+  0,				  /*tp_iter */
+  0,				  /*tp_iternext */
+  minsym_object_methods,	  /*tp_methods */
+  0,				  /*tp_members */
+  minsym_object_getset		  /*tp_getset */
+};
diff --git a/gdb/python/py-objfile.c b/gdb/python/py-objfile.c
index 5dc9ae6..498819b 100644
--- a/gdb/python/py-objfile.c
+++ b/gdb/python/py-objfile.c
@@ -25,7 +25,7 @@
 #include "build-id.h"
 #include "symtab.h"
 
-typedef struct
+typedef struct objfile_object
 {
   PyObject_HEAD
 
@@ -653,6 +653,31 @@ objfile_to_objfile_object (struct objfile *objfile)
   return (PyObject *) object;
 }
 
+static PyObject *
+objfpy_get_sections (PyObject *self, void *closure)
+{
+  objfile_object *obj = (objfile_object *) self;
+  PyObject *dict;
+  asection *section = obj->objfile->sections->the_bfd_section;
+
+  dict = PyDict_New();
+  if (!dict)
+    return NULL;
+
+  while (section) {
+    PyObject *sec = section_to_section_object(section, obj->objfile);
+    if (!sec) {
+      Py_DECREF(dict);
+      return NULL;
+    }
+    PyDict_SetItemString(dict, section->name, sec);
+    Py_DECREF(sec);
+    section = section->next;
+  }
+
+  return PyDictProxy_New(dict);
+}
+
 int
 gdbpy_initialize_objfile (void)
 {
@@ -707,6 +732,8 @@ static PyGetSetDef objfile_getset[] =
     "Type printers.", NULL },
   { "xmethods", objfpy_get_xmethods, NULL,
     "Debug methods.", NULL },
+  { "sections", objfpy_get_sections, NULL,
+    "The sections that make up the objfile.", NULL },
   { NULL }
 };
 
diff --git a/gdb/python/py-section.c b/gdb/python/py-section.c
new file mode 100644
index 0000000..985c69c
--- /dev/null
+++ b/gdb/python/py-section.c
@@ -0,0 +1,401 @@
+/* Python interface to sections.
+
+   Copyright (C) 2008-2013 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+#include "block.h"
+#include "exceptions.h"
+#include "frame.h"
+#include "symtab.h"
+#include "python-internal.h"
+#include "objfiles.h"
+
+typedef struct secpy_section_object {
+  PyObject_HEAD
+  asection *section;
+  struct objfile *objfile;
+  /* The GDB section structure this object is wrapping.  */
+  /* A section object is associated with an objfile, so keep track with
+     doubly-linked list, rooted in the objfile.  This lets us
+     invalidate the underlying section when the objfile is
+     deleted.  */
+  struct secpy_section_object *prev;
+  struct secpy_section_object *next;
+} section_object;
+
+/* Require a valid section.  All access to section_object->section should be
+   gated by this call.  */
+#define SYMPY_REQUIRE_VALID(section_obj, section)		\
+  do {							\
+    section = section_object_to_section (section_obj);	\
+    if (section == NULL)					\
+      {							\
+	PyErr_SetString (PyExc_RuntimeError,		\
+			 _("Section is invalid."));	\
+	return NULL;					\
+      }							\
+  } while (0)
+
+static const struct objfile_data *secpy_objfile_data_key;
+
+static PyObject *
+secpy_str (PyObject *self)
+{
+  PyObject *result;
+  asection *section = NULL;
+
+  SYMPY_REQUIRE_VALID (self, section);
+
+  result = PyString_FromString (section->name);
+
+  return result;
+}
+
+static PyObject *
+secpy_get_flags (PyObject *self, void *closure)
+{
+  asection *section = NULL;
+
+  SYMPY_REQUIRE_VALID (self, section);
+
+  return PyInt_FromLong (section->flags);
+}
+
+static PyObject *
+secpy_get_objfile (PyObject *self, void *closure)
+{
+  section_object *obj = (section_object *)self;
+
+  if (! PyObject_TypeCheck (self, &section_object_type))
+    return NULL;
+
+  return objfile_to_objfile_object (obj->objfile);
+}
+
+static PyObject *
+secpy_get_name (PyObject *self, void *closure)
+{
+  asection *section = NULL;
+
+  SYMPY_REQUIRE_VALID (self, section);
+
+  return PyString_FromString (section->name);
+}
+
+static PyObject *
+secpy_get_id (PyObject *self, void *closure)
+{
+  asection *section = NULL;
+
+  SYMPY_REQUIRE_VALID (self, section);
+
+  return PyInt_FromLong (section->id);
+}
+
+#define secpy_return_string(self, val)		\
+({						\
+  asection *section = NULL;			\
+  SYMPY_REQUIRE_VALID (self, section);		\
+  PyString_FromString (val);		\
+})
+
+#define secpy_return_longlong(self, val)	\
+({						\
+  asection *section = NULL;			\
+  SYMPY_REQUIRE_VALID (self, section);		\
+  PyLong_FromUnsignedLongLong (val);	\
+})
+
+static PyObject *
+secpy_get_vma (PyObject *self, void *closure)
+{
+  return secpy_return_longlong(self, section->vma);
+}
+
+static PyObject *
+secpy_get_lma (PyObject *self, void *closure)
+{
+  return secpy_return_longlong(self, section->lma);
+}
+
+static PyObject *
+secpy_get_size (PyObject *self, void *closure)
+{
+  return secpy_return_longlong(self, section->size);
+}
+
+static PyObject *
+secpy_get_rawsize (PyObject *self, void *closure)
+{
+  return secpy_return_longlong(self, section->rawsize);
+}
+
+static PyObject *
+secpy_get_compressed_size (PyObject *self, void *closure)
+{
+  return secpy_return_longlong(self, section->compressed_size);
+}
+
+static PyObject *
+secpy_get_print_name (PyObject *self, void *closure)
+{
+  return secpy_str (self);
+}
+
+static PyObject *
+secpy_is_compressed (PyObject *self, void *closure)
+{
+  asection *section = NULL;
+
+  SYMPY_REQUIRE_VALID (self, section);
+
+  return PyBool_FromLong (section->compress_status == 1);
+}
+
+/* Given a section, and a section_object that has previously been
+   allocated and initialized, populate the section_object with the
+   asection data.  Also, register the section_object life-cycle
+   with the life-cycle of the object file associated with this
+   section, if needed.  */
+static void
+set_section (section_object *obj, asection *section, struct objfile *objfile)
+{
+  obj->section = section;
+  obj->prev = NULL;
+  obj->objfile = objfile;
+  obj->next = objfile_data (obj->objfile, secpy_objfile_data_key);
+
+  if (obj->next)
+    obj->next->prev = obj;
+
+  set_objfile_data (obj->objfile, secpy_objfile_data_key, obj);
+}
+
+/* Create a new section object (gdb.Section) that encapsulates the struct
+   section object from GDB.  */
+PyObject *
+section_to_section_object (asection *section, struct objfile *objfile)
+{
+  section_object *sec_obj;
+
+  sec_obj = PyObject_New (section_object, &section_object_type);
+  if (sec_obj) {
+    set_section (sec_obj, section, objfile);
+  }
+
+  return (PyObject *) sec_obj;
+}
+
+/* Return the section that is wrapped by this section object.  */
+asection *
+section_object_to_section (PyObject *obj)
+{
+  if (! PyObject_TypeCheck (obj, &section_object_type))
+    return NULL;
+  return ((section_object *) obj)->section;
+}
+
+static void
+secpy_dealloc (PyObject *obj)
+{
+  section_object *section_obj = (section_object *) obj;
+
+  if (section_obj->prev)
+    section_obj->prev->next = section_obj->next;
+  else if (section_obj->objfile)
+    {
+      set_objfile_data (section_obj->objfile,
+			secpy_objfile_data_key, section_obj->next);
+    }
+  if (section_obj->next)
+    section_obj->next->prev = section_obj->prev;
+  section_obj->section = NULL;
+}
+
+static PyObject *
+secpy_is_valid (PyObject *self, PyObject *args)
+{
+  asection *section = NULL;
+
+  section = section_object_to_section (self);
+  if (section == NULL)
+    Py_RETURN_FALSE;
+
+  Py_RETURN_TRUE;
+}
+
+/* This function is called when an objfile is about to be freed.
+   Invalidate the section as further actions on the section would result
+   in bad data.  All access to obj->section should be gated by
+   SYMPY_REQUIRE_VALID which will raise an exception on invalid
+   sections.  */
+static void
+del_objfile_sections (struct objfile *objfile, void *datum)
+{
+  section_object *obj = datum;
+  while (obj)
+    {
+      section_object *next = obj->next;
+
+      obj->section = NULL;
+      obj->next = NULL;
+      obj->prev = NULL;
+
+      obj = next;
+    }
+}
+
+int
+gdbpy_initialize_sections (void)
+{
+  if (PyType_Ready (&section_object_type) < 0)
+    return -1;
+
+  /* Register an objfile "free" callback so we can properly
+     invalidate section when an object file that is about to be
+     deleted.  */
+  secpy_objfile_data_key
+    = register_objfile_data_with_cleanup (NULL, del_objfile_sections);
+
+  if (PyModule_AddIntConstant (gdb_module, "SEC_NO_FLAGS", SEC_NO_FLAGS) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_ALLOC", SEC_ALLOC) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_LOAD", SEC_LOAD) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_RELOC", SEC_RELOC) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_READONLY", SEC_READONLY) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_CODE", SEC_CODE) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_DATA", SEC_DATA) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_ROM", SEC_ROM) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_CONSTRUCTOR",
+				  SEC_CONSTRUCTOR) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_HAS_CONTENTS",
+				  SEC_HAS_CONTENTS) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_NEVER_LOAD",
+				  SEC_NEVER_LOAD) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_THREAD_LOCAL",
+				  SEC_THREAD_LOCAL) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_HAS_GOT_REF",
+				  SEC_HAS_GOT_REF) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_IS_COMMON",
+				  SEC_IS_COMMON) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_DEBUGGING",
+				  SEC_DEBUGGING) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_IN_MEMORY",
+				  SEC_IN_MEMORY) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_EXCLUDE", SEC_EXCLUDE) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_SORT_ENTRIES",
+				  SEC_SORT_ENTRIES) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_ONCE",
+				  SEC_LINK_ONCE) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES",
+				  SEC_LINK_DUPLICATES) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_DISCARD",
+				  SEC_LINK_DUPLICATES_DISCARD) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_ONE_ONLY",
+				  SEC_LINK_DUPLICATES_ONE_ONLY) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_SAME_SIZE",
+				  SEC_LINK_DUPLICATES_SAME_SIZE) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_LINKER_CREATED",
+				  SEC_LINKER_CREATED) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_KEEP", SEC_KEEP) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_SMALL_DATA",
+				  SEC_SMALL_DATA) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_MERGE", SEC_MERGE) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_STRINGS", SEC_STRINGS) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_GROUP", SEC_GROUP) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_SHARED_LIBRARY",
+				  SEC_COFF_SHARED_LIBRARY) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_ELF_REVERSE_COPY",
+				  SEC_ELF_REVERSE_COPY) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_SHARED",
+				  SEC_COFF_SHARED) < 0
+      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_NOREAD",
+				  SEC_COFF_NOREAD) < 0)
+    return -1;
+
+  return gdb_pymodule_addobject (gdb_module, "Section",
+				 (PyObject *) &section_object_type);
+}
+
+\f
+
+static PyGetSetDef section_object_getset[] = {
+  { "flags", secpy_get_flags, NULL,
+    "Flags of the section.", NULL },
+  { "objfile", secpy_get_objfile, NULL,
+    "Object file in which the section appears.", NULL },
+  { "name", secpy_get_name, NULL,
+    "Name of the section, as it appears in the source code.", NULL },
+  { "size", secpy_get_size, NULL, "Size of the section.", NULL },
+  { "compressed_size", secpy_get_compressed_size, NULL,
+    "Compressed size of the section.", NULL },
+  { "rawsize", secpy_get_rawsize, NULL,
+    "Size of the section on disk.", NULL },
+  { "id", secpy_get_id, NULL,
+    "Sequence number of the section.", NULL },
+  { "print_name", secpy_get_print_name, NULL,
+    "Name of the section in a form suitable for output.\n\
+This is either name or linkage_name, depending on whether the user asked GDB\n\
+to display demangled or mangled names.", NULL },
+  { "vma", secpy_get_vma, NULL,
+    "Virtual memory address of the section at runtime." },
+  { "lma", secpy_get_lma, NULL,
+    "Load memory address of the section." },
+  { "is_compressed", secpy_is_compressed, NULL,
+    "True if the section is compressed." },
+  { NULL }  /* Sentinel */
+};
+
+static PyMethodDef section_object_methods[] = {
+  { "is_valid", secpy_is_valid, METH_NOARGS,
+    "is_valid () -> Boolean.\n\
+Return true if this section is valid, false if not." },
+  {NULL}  /* Sentinel */
+};
+
+PyTypeObject section_object_type = {
+  PyVarObject_HEAD_INIT (NULL, 0)
+  "gdb.Section",		  /*tp_name*/
+  sizeof (section_object),	  /*tp_basicsize*/
+  0,				  /*tp_itemsize*/
+  secpy_dealloc,		  /*tp_dealloc*/
+  0,				  /*tp_print*/
+  0,				  /*tp_getattr*/
+  0,				  /*tp_setattr*/
+  0,				  /*tp_compare*/
+  0,				  /*tp_repr*/
+  0,				  /*tp_as_number*/
+  0,				  /*tp_as_sequence*/
+  0,				  /*tp_as_mapping*/
+  0,				  /*tp_hash */
+  0,				  /*tp_call*/
+  secpy_str,			  /*tp_str*/
+  0,				  /*tp_getattro*/
+  0,				  /*tp_setattro*/
+  0,				  /*tp_as_buffer*/
+  Py_TPFLAGS_DEFAULT,		  /*tp_flags*/
+  "GDB section object",		  /*tp_doc */
+  0,				  /*tp_traverse */
+  0,				  /*tp_clear */
+  0,				  /*tp_richcompare */
+  0,				  /*tp_weaklistoffset */
+  0,				  /*tp_iter */
+  0,				  /*tp_iternext */
+  section_object_methods,	  /*tp_methods */
+  0,				  /*tp_members */
+  section_object_getset		  /*tp_getset */
+};
diff --git a/gdb/python/py-symbol.c b/gdb/python/py-symbol.c
index 4306f61..1aa5477 100644
--- a/gdb/python/py-symbol.c
+++ b/gdb/python/py-symbol.c
@@ -239,6 +239,28 @@ sympy_is_valid (PyObject *self, PyObject *args)
   Py_RETURN_TRUE;
 }
 
+static PyObject *
+sympy_section (PyObject *self, void *closure)
+{
+  struct symbol *symbol = NULL;
+  PyObject *section_obj;
+  struct obj_section *section;
+
+  SYMPY_REQUIRE_VALID (self, symbol);
+
+  section = SYMBOL_OBJ_SECTION(symbol_objfile(symbol), symbol);
+
+  if (section) {
+    section_obj = section_to_section_object(section->the_bfd_section,
+                                            symbol_objfile(symbol));
+    if (section_obj)
+      return section_obj;
+  }
+
+  Py_INCREF (Py_None);
+  return Py_None;
+}
+
 /* Implementation of gdb.Symbol.value (self[, frame]) -> gdb.Value.  Returns
    the value of the symbol, or an error in various circumstances.  */
 
@@ -378,14 +400,26 @@ gdbpy_lookup_symbol (PyObject *self, PyObject *args, PyObject *kw)
 
   if (block_obj)
     block = block_object_to_block (block_obj);
-  else
+  TRY
+    {
+      symbol = lookup_symbol (name, block, domain, &is_a_field_of_this);
+    }
+  CATCH (except, RETURN_MASK_ALL)
+    {
+      GDB_PY_HANDLE_EXCEPTION (except);
+    }
+  END_CATCH
+
+  if (!block)
     {
       struct frame_info *selected_frame;
 
       TRY
 	{
-	  selected_frame = get_selected_frame (_("No frame selected."));
-	  block = get_frame_block (selected_frame, NULL);
+	  if (symbol && symbol_read_needs_frame(symbol)) {
+	    selected_frame = get_selected_frame (_("No frame selected."));
+	    block = get_frame_block (selected_frame, NULL);
+	  }
 	}
       CATCH (except, RETURN_MASK_ALL)
 	{
@@ -394,16 +428,6 @@ gdbpy_lookup_symbol (PyObject *self, PyObject *args, PyObject *kw)
       END_CATCH
     }
 
-  TRY
-    {
-      symbol = lookup_symbol (name, block, domain, &is_a_field_of_this);
-    }
-  CATCH (except, RETURN_MASK_ALL)
-    {
-      GDB_PY_HANDLE_EXCEPTION (except);
-    }
-  END_CATCH
-
   ret_tuple = PyTuple_New (2);
   if (!ret_tuple)
     return NULL;
@@ -583,6 +607,8 @@ to display demangled or mangled names.", NULL },
     "True if the symbol requires a frame for evaluation." },
   { "line", sympy_line, NULL,
     "The source line number at which the symbol was defined." },
+  { "section", sympy_section, NULL,
+    "Section of executable where symbol resides." },
   { NULL }  /* Sentinel */
 };
 
diff --git a/gdb/python/python-internal.h b/gdb/python/python-internal.h
index ee949b7..e8776f1 100644
--- a/gdb/python/python-internal.h
+++ b/gdb/python/python-internal.h
@@ -143,6 +143,8 @@ typedef int Py_ssize_t;
 #define PyEval_ReleaseLock()
 #endif
 
+#define gdb_py_long_from_pointer PyLong_FromLong
+
 /* Python supplies HAVE_LONG_LONG and some `long long' support when it
    is available.  These defines let us handle the differences more
    cleanly.  */
@@ -241,6 +243,10 @@ extern PyTypeObject block_object_type
     CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF("block_object");
 extern PyTypeObject symbol_object_type
     CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF ("symbol_object");
+extern PyTypeObject section_object_type
+     CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF ("section_object");
+extern PyTypeObject objfile_object_type
+     CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF ("objfile_object");
 extern PyTypeObject event_object_type
     CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF ("event_object");
 extern PyTypeObject stop_event_object_type
@@ -362,6 +368,8 @@ PyObject *gdbpy_frame_stop_reason_string (PyObject *, PyObject *);
 PyObject *gdbpy_lookup_symbol (PyObject *self, PyObject *args, PyObject *kw);
 PyObject *gdbpy_lookup_global_symbol (PyObject *self, PyObject *args,
 				      PyObject *kw);
+PyObject *gdbpy_lookup_minimal_symbol (PyObject *self, PyObject *args,
+				       PyObject *kw);
 PyObject *gdbpy_newest_frame (PyObject *self, PyObject *args);
 PyObject *gdbpy_selected_frame (PyObject *self, PyObject *args);
 PyObject *gdbpy_block_for_pc (PyObject *self, PyObject *args);
@@ -381,6 +389,7 @@ char *gdbpy_parse_command_name (const char *name,
 				struct cmd_list_element ***base_list,
 				struct cmd_list_element **start_list);
 
+PyObject *section_to_section_object (asection *sym, struct objfile *objf);
 PyObject *symtab_and_line_to_sal_object (struct symtab_and_line sal);
 PyObject *symtab_to_symtab_object (struct symtab *symtab);
 PyObject *symbol_to_symbol_object (struct symbol *sym);
@@ -414,6 +423,7 @@ PyObject *find_inferior_object (int pid);
 PyObject *inferior_to_inferior_object (struct inferior *inferior);
 
 const struct block *block_object_to_block (PyObject *obj);
+asection *section_object_to_section (PyObject *obj);
 struct symbol *symbol_object_to_symbol (PyObject *obj);
 struct value *value_object_to_value (PyObject *self);
 struct value *convert_value_from_python (PyObject *obj);
@@ -436,6 +446,10 @@ int gdbpy_initialize_commands (void)
   CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
 int gdbpy_initialize_symbols (void)
   CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
+int gdbpy_initialize_minsymbols (void)
+  CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
+int gdbpy_initialize_sections (void)
+  CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
 int gdbpy_initialize_symtabs (void)
   CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
 int gdbpy_initialize_blocks (void)
diff --git a/gdb/python/python.c b/gdb/python/python.c
index 4f88b0e..817ec25 100644
--- a/gdb/python/python.c
+++ b/gdb/python/python.c
@@ -1800,7 +1800,9 @@ message == an error message without a stack will be printed."),
       || gdbpy_initialize_frames () < 0
       || gdbpy_initialize_commands () < 0
       || gdbpy_initialize_symbols () < 0
+      || gdbpy_initialize_minsymbols () < 0
       || gdbpy_initialize_symtabs () < 0
+      || gdbpy_initialize_sections () < 0
       || gdbpy_initialize_blocks () < 0
       || gdbpy_initialize_functions () < 0
       || gdbpy_initialize_parameters () < 0
@@ -2025,7 +2027,10 @@ a boolean indicating if name is a field of the current implied argument\n\
     METH_VARARGS | METH_KEYWORDS,
     "lookup_global_symbol (name [, domain]) -> symbol\n\
 Return the symbol corresponding to the given name (or None)." },
-
+{ "lookup_minimal_symbol", (PyCFunction) gdbpy_lookup_minimal_symbol,
+    METH_VARARGS | METH_KEYWORDS,
+    "lookup_minimal_symbol (name) -> minsym\n\
+Return the symbol corresponding to the given name (or None)." },
   { "lookup_objfile", (PyCFunction) gdbpy_lookup_objfile,
     METH_VARARGS | METH_KEYWORDS,
     "lookup_objfile (name, [by_build_id]) -> objfile\n\
-- 
2.7.0

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Enable gdb to open Linux kernel dumps
  2016-01-31 21:45 Enable gdb to open Linux kernel dumps Ales Novak
                   ` (3 preceding siblings ...)
  2016-01-31 21:45 ` [PATCH 1/4] Create new target "kdump" which uses libkdumpfile: https://github.com/ptesarik/libkdumpfile to access contents of compressed kernel dump Ales Novak
@ 2016-02-01 11:27 ` Kieran Bingham
  2016-02-01 11:51   ` Kieran Bingham
  4 siblings, 1 reply; 31+ messages in thread
From: Kieran Bingham @ 2016-02-01 11:27 UTC (permalink / raw)
  To: Ales Novak, gdb-patches

Hi Ales,

I'm just checking out your tree now to try locally.

It sounds like there is a high degree of crossover in our work, but I
believe our efforts can complement each other if we work together.

On 31/01/16 21:44, Ales Novak wrote:
> Following patches are adding basic ability to access Linux kernel
> dumps using the libkdumpfile library. They're creating new target
> "kdump", so all one has to do is to provide appropriate debuginfo and
> then run "target kdump /path/to/vmcore".
>
> The tasks of the dumped kernel are mapped to threads in gdb. 
>
> Apart from that, there is code adding an understanding of the Linux
> SLAB memory allocator, which means we can tell which SLAB a given
> address belongs to, or list the objects for a given SLAB name - and
> more.
>
> Patches are against "gdb-7.10-release" (but will apply elsewhere). 
>
> Note: registers of task are fetched accordingly - either from the dump
> metadata (the active tasks) or from their stacks. It should be noted
> that as this mechanism varies amongst the kernel versions and
> configurations, my naive implementation currently covers only the
> dumps I encounter, handling of different kernel versions is to be
> added.
In the work that I am doing, I had expected this to be done in Python
for exactly this reason. The kernel-version specifics (and architecture
specifics) can then live alongside their respective trees.
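
To give a rough idea of what I mean, the version-specific parts could be
selected at run time from Python. A minimal sketch (the helper names are
purely hypothetical, nothing like this exists yet), run inside gdb with
kernel symbols loaded:

  import gdb

  # Hypothetical sketch: choose register-fetching logic per kernel release.
  release = gdb.parse_and_eval("init_uts_ns.name.release").string()

  def fetch_task_registers_v3(task):
      pass  # version-specific layout, maintained next to the 3.x tree

  def fetch_task_registers_v4(task):
      pass  # version-specific layout, maintained next to the 4.x tree

  fetch_task_registers = (fetch_task_registers_v4
                          if release.startswith("4.")
                          else fetch_task_registers_v3)
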
> In the near future, our plan is to remove the clumsy C-code handling
> this and reimplement it in Python - only the binding to certain gdb
> structures (e.g. thread, regcache) has to be added. A colleague of
> mine is already working on that.
This sounds exactly like the work I am doing right now.
Could you pass on my details to your colleague so we can discuss?

I recently made a posting on gdb@ suggesting the addition of a
gdb.Target object to work towards implementing this, and I have been
liaising with Jan Kiszka to manage the Linux/scripts/gdb/ integration.



> The github home of these patches is at:
>
> https://github.com/alesax/gdb-kdump/tree/for-next
>
> libkdumpfile lives at:
>
> https://github.com/ptesarik/libkdumpfile
>
> Fork adding the SLAB support lives at:
>
> https://github.com/tehcaster/gdb-kdump/tree/slab-support
>
>
Regards

Kieran Bingham

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Enable gdb to open Linux kernel dumps
  2016-02-01 11:27 ` Enable gdb to open Linux kernel dumps Kieran Bingham
@ 2016-02-01 11:51   ` Kieran Bingham
  2016-02-01 14:32     ` Ales Novak
  0 siblings, 1 reply; 31+ messages in thread
From: Kieran Bingham @ 2016-02-01 11:51 UTC (permalink / raw)
  To: Ales Novak, gdb-patches


On 01/02/16 11:27, Kieran Bingham wrote:
> Hi Ales,
> 
> I'm just checking out your tree now to try locally.
> 
> It sounds like there is a high degree of crossover in our work, but I
> believe our efforts can complement each other if we work together.
> 
> On 31/01/16 21:44, Ales Novak wrote:
>> Following patches are adding basic ability to access Linux kernel
>> dumps using the libkdumpfile library. They're creating new target
>> "kdump", so all one has to do is to provide appropriate debuginfo and
>> then run "target kdump /path/to/vmcore".
>>
>> The tasks of the dumped kernel are mapped to threads in gdb. 
>>
>> Apart from that, there is code adding an understanding of the Linux
>> SLAB memory allocator, which means we can tell which SLAB a given
>> address belongs to, or list the objects for a given SLAB name - and
>> more.
>>
>> Patches are against "gdb-7.10-release" (but will apply elsewhere). 
>>
>> Note: registers of task are fetched accordingly - either from the dump
>> metadata (the active tasks) or from their stacks. It should be noted
>> that as this mechanism varies amongst the kernel versions and
>> configurations, my naive implementation currently covers only the
>> dumps I encounter, handling of different kernel versions is to be
>> added.
> In the work that I am doing, I had expected this to be done in Python
> for exactly this reason. The kernel-version specifics (and architecture
> specifics) can then live alongside their respective trees.
>> In the near future, our plan is to remove the clumsy C-code handling
>> this and reimplement it in Python - only the binding to certain gdb
>> structures (e.g. thread, regcache) has to be added. A colleague of
>> mine is already working on that.
> This sounds exactly like the work I am doing right now.
> Could you pass on my details to your colleague so we can discuss?

Aha, or is your colleague Andreas Arnez? I'm just about to reply to his
mail over on gdb@ next.



> 
> I recently made a posting on gdb@ suggesting the addition of a
> gdb.Target object to work towards implementing this, and I have been
> liaising with Jan Kiszka to manage the Linux/scripts/gdb/ integration.
> 
> 
> 
>> The github home of these patches is at:
>>
>> https://github.com/alesax/gdb-kdump/tree/for-next
>>
>> libkdumpfile lives at:
>>
>> https://github.com/ptesarik/libkdumpfile
>>
>> Fork adding the SLAB support lives at:
>>
>> https://github.com/tehcaster/gdb-kdump/tree/slab-support
>>
>>
> Regards
> 
> Kieran Bingham
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-01-31 21:45 ` [PATCH 2/4] Add Jeff Mahoney's py-crash patches Ales Novak
@ 2016-02-01 12:35   ` Kieran Bingham
  2016-02-01 22:23   ` Doug Evans
  1 sibling, 0 replies; 31+ messages in thread
From: Kieran Bingham @ 2016-02-01 12:35 UTC (permalink / raw)
  To: Ales Novak, gdb-patches

Are these identical to the ones available at
https://github.com/jeffmahoney/py-crash, or have you made any modifications?

Wouldn't it be better to keep Jeff's patches separate, and maintain his
authorship?

I can see these potentially being useful to my work, so I believe they
would be good additions.
I'll likely pick the patches from his repository for now and base my work
on top of them.
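
Going only by what the patch adds (I haven't actually run this yet, and
"init_task" is just an example symbol), usage from the Python side would
presumably look something like:

  import gdb

  objfile = gdb.objfiles()[0]

  # 'sections' is the new dict-like attribute on gdb.Objfile.
  for name, sec in objfile.sections.items():
      print(name, hex(sec.vma), sec.size)

  # New minimal-symbol lookup; returns a gdb.MiniSymbol or None.
  msym = gdb.lookup_minimal_symbol("init_task")
  if msym is not None:
      print(msym.print_name, msym.value())

  # gdb.Symbol also gains a 'section' attribute with this series.
  sym = gdb.lookup_global_symbol("init_task")
  if sym is not None and sym.section is not None:
      print(sym.section.name)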

--
Regards

Kieran

On 31/01/16 21:44, Ales Novak wrote:
> ---
>  gdb/Makefile.in              |  12 ++
>  gdb/python/py-minsymbol.c    | 353 +++++++++++++++++++++++++++++++++++++
>  gdb/python/py-objfile.c      |  29 +++-
>  gdb/python/py-section.c      | 401 +++++++++++++++++++++++++++++++++++++++++++
>  gdb/python/py-symbol.c       |  52 ++++--
>  gdb/python/python-internal.h |  14 ++
>  gdb/python/python.c          |   7 +-
>  7 files changed, 853 insertions(+), 15 deletions(-)
>  create mode 100644 gdb/python/py-minsymbol.c
>  create mode 100644 gdb/python/py-section.c
>
> diff --git a/gdb/Makefile.in b/gdb/Makefile.in
> index 3c7518a..751de4d 100644
> --- a/gdb/Makefile.in
> +++ b/gdb/Makefile.in
> @@ -398,11 +398,13 @@ SUBDIR_PYTHON_OBS = \
>  	py-infthread.o \
>  	py-lazy-string.o \
>  	py-linetable.o \
> +	py-minsymbol.o \
>  	py-newobjfileevent.o \
>  	py-objfile.o \
>  	py-param.o \
>  	py-prettyprint.o \
>  	py-progspace.o \
> +	py-section.o \
>  	py-signalevent.o \
>  	py-stopevent.o \
>  	py-symbol.o \
> @@ -438,11 +440,13 @@ SUBDIR_PYTHON_SRCS = \
>  	python/py-infthread.c \
>  	python/py-lazy-string.c \
>  	python/py-linetable.c \
> +	python/py-minsymbol.c \
>  	python/py-newobjfileevent.c \
>  	python/py-objfile.c \
>  	python/py-param.c \
>  	python/py-prettyprint.c \
>  	python/py-progspace.c \
> +	python/py-section.c \
>  	python/py-signalevent.c \
>  	python/py-stopevent.c \
>  	python/py-symbol.c \
> @@ -2607,6 +2611,10 @@ py-linetable.o: $(srcdir)/python/py-linetable.c
>  	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-linetable.c
>  	$(POSTCOMPILE)
>  
> +py-minsymbol.o: $(srcdir)/python/py-minsymbol.c
> +	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-minsymbol.c
> +	$(POSTCOMPILE)
> +
>  py-newobjfileevent.o: $(srcdir)/python/py-newobjfileevent.c
>  	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-newobjfileevent.c
>  	$(POSTCOMPILE)
> @@ -2627,6 +2635,10 @@ py-progspace.o: $(srcdir)/python/py-progspace.c
>  	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-progspace.c
>  	$(POSTCOMPILE)
>  
> +py-section.o: $(srcdir)/python/py-section.c
> +	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-section.c
> +	$(POSTCOMPILE)
> +
>  py-signalevent.o: $(srcdir)/python/py-signalevent.c
>  	$(COMPILE) $(PYTHON_CFLAGS) $(srcdir)/python/py-signalevent.c
>  	$(POSTCOMPILE)
> diff --git a/gdb/python/py-minsymbol.c b/gdb/python/py-minsymbol.c
> new file mode 100644
> index 0000000..efff59da
> --- /dev/null
> +++ b/gdb/python/py-minsymbol.c
> @@ -0,0 +1,353 @@
> +/* Python interface to minsymbols.
> +
> +   Copyright (C) 2008-2013 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#include "defs.h"
> +#include "block.h"
> +#include "exceptions.h"
> +#include "frame.h"
> +#include "symtab.h"
> +#include "python-internal.h"
> +#include "objfiles.h"
> +#include "value.h"
> +
> +extern PyTypeObject minsym_object_type;
> +
> +typedef struct msympy_symbol_object {
> +  PyObject_HEAD
> +  /* The GDB minimal_symbol structure this object is wrapping.  */
> +  struct minimal_symbol *minsym;
> +
> +  struct type *type;
> +  /* A symbol object is associated with an objfile, so keep track with
> +     doubly-linked list, rooted in the objfile.  This lets us
> +     invalidate the underlying struct minimal_symbol when the objfile is
> +     deleted.  */
> +  struct msympy_symbol_object *prev;
> +  struct msympy_symbol_object *next;
> +} minsym_object;
> +
> +PyObject *minsym_to_minsym_object (struct minimal_symbol *minsym);
> +struct minimal_symbol *minsym_object_to_minsym (PyObject *obj);
> +/* Require a valid symbol.  All access to minsym_object->symbol should be
> +   gated by this call.  */
> +#define MSYMPY_REQUIRE_VALID(minsym_obj, minsym)	\
> +  do {							\
> +    minsym = minsym_object_to_minsym (minsym_obj);	\
> +    if (minsym == NULL)				\
> +      {							\
> +	PyErr_SetString (PyExc_RuntimeError,		\
> +			 _("MiniSymbol is invalid."));	\
> +	return NULL;					\
> +      }							\
> +  } while (0)
> +
> +static PyObject *
> +msympy_str (PyObject *self)
> +{
> +  PyObject *result;
> +  struct minimal_symbol *minsym = NULL;
> +
> +  MSYMPY_REQUIRE_VALID (self, minsym);
> +
> +  result = PyString_FromString (MSYMBOL_PRINT_NAME (minsym));
> +
> +  return result;
> +}
> +
> +static PyObject *
> +msympy_get_name (PyObject *self, void *closure)
> +{
> +  struct minimal_symbol *minsym = NULL;
> +
> +  MSYMPY_REQUIRE_VALID (self, minsym);
> +
> +  return PyString_FromString (MSYMBOL_NATURAL_NAME (minsym));
> +}
> +
> +static PyObject *
> +msympy_get_file_name (PyObject *self, void *closure)
> +{
> +  struct minimal_symbol *minsym = NULL;
> +
> +  MSYMPY_REQUIRE_VALID (self, minsym);
> +
> +  return PyString_FromString (minsym->filename);
> +}
> +
> +static PyObject *
> +msympy_get_linkage_name (PyObject *self, void *closure)
> +{
> +  struct minimal_symbol *minsym = NULL;
> +
> +  MSYMPY_REQUIRE_VALID (self, minsym);
> +
> +  return PyString_FromString (MSYMBOL_LINKAGE_NAME (minsym));
> +}
> +
> +static PyObject *
> +msympy_get_print_name (PyObject *self, void *closure)
> +{
> +  struct minimal_symbol *minsym = NULL;
> +
> +  MSYMPY_REQUIRE_VALID (self, minsym);
> +
> +  return msympy_str (self);
> +}
> +
> +static PyObject *
> +msympy_is_valid (PyObject *self, PyObject *args)
> +{
> +  struct minimal_symbol *minsym = NULL;
> +
> +  minsym = minsym_object_to_minsym (self);
> +  if (minsym == NULL)
> +    Py_RETURN_FALSE;
> +
> +  Py_RETURN_TRUE;
> +}
> +
> +/* Implementation of gdb.MiniSymbol.value (self) -> gdb.Value.  Returns
> +   the value of the symbol, or an error in various circumstances.  */
> +
> +static PyObject *
> +msympy_value (PyObject *self, PyObject *args)
> +{
> +  minsym_object *minsym_obj = (minsym_object *)self;
> +  struct minimal_symbol *minsym = NULL;
> +  struct value *value = NULL;
> +  volatile struct gdb_exception except;
> +
> +  if (!PyArg_ParseTuple (args, ""))
> +    return NULL;
> +
> +  MSYMPY_REQUIRE_VALID (self, minsym);
> +  TRY
> +    {
> +      value = value_from_ulongest(minsym_obj->type,
> +				  MSYMBOL_VALUE_RAW_ADDRESS(minsym));
> +      if (value)
> +	set_value_address(value, MSYMBOL_VALUE_RAW_ADDRESS(minsym));
> +    }CATCH (except, RETURN_MASK_ALL) {
> +	GDB_PY_HANDLE_EXCEPTION (except);
> +    } END_CATCH
> +  
> +
> +  return value_to_value_object (value);
> +}
> +
> +/* Given a symbol, and a minsym_object that has previously been
> +   allocated and initialized, populate the minsym_object with the
> +   struct minimal_symbol data.  Also, register the minsym_object life-cycle
> +   with the life-cycle of the object file associated with this
> +   symbol, if needed.  */
> +static void
> +set_symbol (minsym_object *obj, struct minimal_symbol *minsym)
> +{
> +  obj->minsym = minsym;
> +  switch (minsym->type) {
> +  case mst_text:
> +  case mst_solib_trampoline:
> +  case mst_file_text:
> +  case mst_text_gnu_ifunc:
> +  case mst_slot_got_plt:
> +    obj->type = builtin_type(python_gdbarch)->builtin_func_ptr;
> +    break;
> +
> +  case mst_data:
> +  case mst_abs:
> +  case mst_file_data:
> +  case mst_file_bss:
> +    obj->type = builtin_type(python_gdbarch)->builtin_data_ptr;
> +    break;
> +
> +  case mst_unknown:
> +  default:
> +    obj->type = builtin_type(python_gdbarch)->builtin_void;
> +    break;
> +  }
> +
> +  obj->prev = NULL;
> +  obj->next = NULL;
> +}
> +
> +/* Create a new symbol object (gdb.MiniSymbol) that encapsulates the struct
> +   symbol object from GDB.  */
> +PyObject *
> +minsym_to_minsym_object (struct minimal_symbol *minsym)
> +{
> +  minsym_object *msym_obj;
> +
> +  msym_obj = PyObject_New (minsym_object, &minsym_object_type);
> +  if (msym_obj)
> +    set_symbol (msym_obj, minsym);
> +
> +  return (PyObject *) msym_obj;
> +}
> +
> +/* Return the symbol that is wrapped by this symbol object.  */
> +struct minimal_symbol *
> +minsym_object_to_minsym (PyObject *obj)
> +{
> +  if (! PyObject_TypeCheck (obj, &minsym_object_type))
> +    return NULL;
> +  return ((minsym_object *) obj)->minsym;
> +}
> +
> +static void
> +msympy_dealloc (PyObject *obj)
> +{
> +  minsym_object *msym_obj = (minsym_object *) obj;
> +
> +  if (msym_obj->prev)
> +    msym_obj->prev->next = msym_obj->next;
> +  if (msym_obj->next)
> +    msym_obj->next->prev = msym_obj->prev;
> +  msym_obj->minsym = NULL;
> +}
> +
> +/* Implementation of
> +   gdb.lookup_minimal_symbol (name) -> symbol or None.  */
> +
> +PyObject *
> +gdbpy_lookup_minimal_symbol (PyObject *self, PyObject *args, PyObject *kw)
> +{
> +  int domain = VAR_DOMAIN;
> +  const char *name;
> +  static char *keywords[] = { "name", NULL };
> +  struct bound_minimal_symbol bound_minsym;
> +  struct minimal_symbol *minsym = NULL;
> +  PyObject *msym_obj;
> +  volatile struct gdb_exception except;
> +
> +  if (!PyArg_ParseTupleAndKeywords (args, kw, "s|", keywords, &name))
> +    return NULL;
> +
> +  TRY
> +    {
> +      bound_minsym = lookup_minimal_symbol (name, NULL, NULL);
> +    } CATCH (except, RETURN_MASK_ALL) {
> +  GDB_PY_HANDLE_EXCEPTION (except);
> +
> +  } END_CATCH
> +
> +  if (minsym)
> +    {
> +      msym_obj = minsym_to_minsym_object (bound_minsym.minsym);
> +      if (!msym_obj)
> +	return NULL;
> +    }
> +  else
> +    {
> +      msym_obj = Py_None;
> +      Py_INCREF (Py_None);
> +    }
> +
> +  return msym_obj;
> +}
> +
> +int
> +gdbpy_initialize_minsymbols (void)
> +{
> +  if (PyType_Ready (&minsym_object_type) < 0)
> +    return -1;
> +
> +  if (PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_UNKNOWN",
> +			       mst_unknown) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_TEXT", mst_text) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_TEXT_GNU_IFUNC",
> +			      mst_text_gnu_ifunc) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_SLOT_GOT_PLT",
> +			      mst_slot_got_plt) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_DATA", mst_data) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_BSS", mst_bss) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_ABS", mst_abs) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_SOLIB_TRAMPOLINE",
> +			      mst_solib_trampoline) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_FILE_TEXT",
> +			      mst_file_text) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_FILE_DATA",
> +			      mst_file_data) < 0
> +  || PyModule_AddIntConstant (gdb_module, "MINSYMBOL_TYPE_FILE_BSS",
> +			      mst_file_bss) < 0)
> +    return -1;
> +
> +  return gdb_pymodule_addobject (gdb_module, "MiniSymbol",
> +				 (PyObject *) &minsym_object_type);
> +}
> +
> +\f
> +
> +static PyGetSetDef minsym_object_getset[] = {
> +  { "name", msympy_get_name, NULL,
> +    "Name of the symbol, as it appears in the source code.", NULL },
> +  { "linkage_name", msympy_get_linkage_name, NULL,
> +    "Name of the symbol, as used by the linker (i.e., may be mangled).",
> +    NULL },
> +  { "filename", msympy_get_file_name, NULL,
> +    "Name of source file the symbol is in. Only applies for mst_file_*.",
> +    NULL },
> +  { "print_name", msympy_get_print_name, NULL,
> +    "Name of the symbol in a form suitable for output.\n\
> +This is either name or linkage_name, depending on whether the user asked GDB\n\
> +to display demangled or mangled names.", NULL },
> +  { NULL }  /* Sentinel */
> +};
> +
> +static PyMethodDef minsym_object_methods[] = {
> +  { "is_valid", msympy_is_valid, METH_NOARGS,
> +    "is_valid () -> Boolean.\n\
> +Return true if this symbol is valid, false if not." },
> +  { "value", msympy_value, METH_VARARGS,
> +    "value ([frame]) -> gdb.Value\n\
> +Return the value of the symbol." },
> +  {NULL}  /* Sentinel */
> +};
> +
> +PyTypeObject minsym_object_type = {
> +  PyVarObject_HEAD_INIT (NULL, 0)
> +  "gdb.MiniSymbol",			  /*tp_name*/
> +  sizeof (minsym_object),	  /*tp_basicsize*/
> +  0,				  /*tp_itemsize*/
> +  msympy_dealloc,		  /*tp_dealloc*/
> +  0,				  /*tp_print*/
> +  0,				  /*tp_getattr*/
> +  0,				  /*tp_setattr*/
> +  0,				  /*tp_compare*/
> +  0,				  /*tp_repr*/
> +  0,				  /*tp_as_number*/
> +  0,				  /*tp_as_sequence*/
> +  0,				  /*tp_as_mapping*/
> +  0,				  /*tp_hash */
> +  0,				  /*tp_call*/
> +  msympy_str,			  /*tp_str*/
> +  0,				  /*tp_getattro*/
> +  0,				  /*tp_setattro*/
> +  0,				  /*tp_as_buffer*/
> +  Py_TPFLAGS_DEFAULT,		  /*tp_flags*/
> +  "GDB minimal symbol object",	  /*tp_doc */
> +  0,				  /*tp_traverse */
> +  0,				  /*tp_clear */
> +  0,				  /*tp_richcompare */
> +  0,				  /*tp_weaklistoffset */
> +  0,				  /*tp_iter */
> +  0,				  /*tp_iternext */
> +  minsym_object_methods,	  /*tp_methods */
> +  0,				  /*tp_members */
> +  minsym_object_getset		  /*tp_getset */
> +};
> diff --git a/gdb/python/py-objfile.c b/gdb/python/py-objfile.c
> index 5dc9ae6..498819b 100644
> --- a/gdb/python/py-objfile.c
> +++ b/gdb/python/py-objfile.c
> @@ -25,7 +25,7 @@
>  #include "build-id.h"
>  #include "symtab.h"
>  
> -typedef struct
> +typedef struct objfile_object
>  {
>    PyObject_HEAD
>  
> @@ -653,6 +653,31 @@ objfile_to_objfile_object (struct objfile *objfile)
>    return (PyObject *) object;
>  }
>  
> +static PyObject *
> +objfpy_get_sections (PyObject *self, void *closure)
> +{
> +  objfile_object *obj = (objfile_object *) self;
> +  PyObject *dict;
> +  asection *section = obj->objfile->sections->the_bfd_section;
> +
> +  dict = PyDict_New();
> +  if (!dict)
> +    return NULL;
> +
> +  while (section) {
> +    PyObject *sec = section_to_section_object(section, obj->objfile);
> +    if (!sec) {
> +      PyObject_Del(dict);
> +      return NULL;
> +    }
> +
> +    PyDict_SetItemString(dict, section->name, sec);
> +    section = section->next;
> +  }
> +
> +  return PyDictProxy_New(dict);
> +}
> +
>  int
>  gdbpy_initialize_objfile (void)
>  {
> @@ -707,6 +732,8 @@ static PyGetSetDef objfile_getset[] =
>      "Type printers.", NULL },
>    { "xmethods", objfpy_get_xmethods, NULL,
>      "Debug methods.", NULL },
> +  { "sections", objfpy_get_sections, NULL,
> +    "The sections that make up the objfile.", NULL },
>    { NULL }
>  };
>  
> diff --git a/gdb/python/py-section.c b/gdb/python/py-section.c
> new file mode 100644
> index 0000000..985c69c
> --- /dev/null
> +++ b/gdb/python/py-section.c
> @@ -0,0 +1,401 @@
> +/* Python interface to sections.
> +
> +   Copyright (C) 2008-2013 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#include "defs.h"
> +#include "block.h"
> +#include "exceptions.h"
> +#include "frame.h"
> +#include "symtab.h"
> +#include "python-internal.h"
> +#include "objfiles.h"
> +
> +typedef struct secpy_section_object {
> +  PyObject_HEAD
> +  asection *section;
> +  struct objfile *objfile;
> +  /* The GDB section structure this object is wrapping.  */
> +  /* A section object is associated with an objfile, so keep track with
> +     doubly-linked list, rooted in the objfile.  This lets us
> +     invalidate the underlying section when the objfile is
> +     deleted.  */
> +  struct secpy_section_object *prev;
> +  struct secpy_section_object *next;
> +} section_object;
> +
> +/* Require a valid section.  All access to section_object->section should be
> +   gated by this call.  */
> +#define SYMPY_REQUIRE_VALID(section_obj, section)		\
> +  do {							\
> +    section = section_object_to_section (section_obj);	\
> +    if (section == NULL)					\
> +      {							\
> +	PyErr_SetString (PyExc_RuntimeError,		\
> +			 _("Section is invalid."));	\
> +	return NULL;					\
> +      }							\
> +  } while (0)
> +
> +static const struct objfile_data *secpy_objfile_data_key;
> +
> +static PyObject *
> +secpy_str (PyObject *self)
> +{
> +  PyObject *result;
> +  asection *section = NULL;
> +
> +  SYMPY_REQUIRE_VALID (self, section);
> +
> +  result = PyString_FromString (section->name);
> +
> +  return result;
> +}
> +
> +static PyObject *
> +secpy_get_flags (PyObject *self, void *closure)
> +{
> +  asection *section = NULL;
> +
> +  SYMPY_REQUIRE_VALID (self, section);
> +
> +  return PyInt_FromLong (section->flags);
> +}
> +
> +static PyObject *
> +secpy_get_objfile (PyObject *self, void *closure)
> +{
> +  section_object *obj = (section_object *)self;
> +
> +  if (! PyObject_TypeCheck (self, &section_object_type))
> +    return NULL;
> +
> +  return objfile_to_objfile_object (obj->objfile);
> +}
> +
> +static PyObject *
> +secpy_get_name (PyObject *self, void *closure)
> +{
> +  asection *section = NULL;
> +
> +  SYMPY_REQUIRE_VALID (self, section);
> +
> +  return PyString_FromString (section->name);
> +}
> +
> +static PyObject *
> +secpy_get_id (PyObject *self, void *closure)
> +{
> +  asection *section = NULL;
> +
> +  SYMPY_REQUIRE_VALID (self, section);
> +
> +  return PyInt_FromLong (section->id);
> +}
> +
> +#define secpy_return_string(self, val)		\
> +({						\
> +  asection *section = NULL;			\
> +  SYMPY_REQUIRE_VALID (self, section);		\
> +  PyString_FromString (val);		\
> +})
> +
> +#define secpy_return_longlong(self, val)	\
> +({						\
> +  asection *section = NULL;			\
> +  SYMPY_REQUIRE_VALID (self, section);		\
> +  PyLong_FromUnsignedLongLong (val);	\
> +})
> +
> +static PyObject *
> +secpy_get_vma (PyObject *self, void *closure)
> +{
> +  return secpy_return_longlong(self, section->vma);
> +}
> +
> +static PyObject *
> +secpy_get_lma (PyObject *self, void *closure)
> +{
> +  return secpy_return_longlong(self, section->lma);
> +}
> +
> +static PyObject *
> +secpy_get_size (PyObject *self, void *closure)
> +{
> +  return secpy_return_longlong(self, section->size);
> +}
> +
> +static PyObject *
> +secpy_get_rawsize (PyObject *self, void *closure)
> +{
> +  return secpy_return_longlong(self, section->rawsize);
> +}
> +
> +static PyObject *
> +secpy_get_compressed_size (PyObject *self, void *closure)
> +{
> +  return secpy_return_longlong(self, section->compressed_size);
> +}
> +
> +static PyObject *
> +secpy_get_print_name (PyObject *self, void *closure)
> +{
> +  return secpy_str (self);
> +}
> +
> +static PyObject *
> +secpy_is_compressed (PyObject *self, void *closure)
> +{
> +  asection *section = NULL;
> +
> +  SYMPY_REQUIRE_VALID (self, section);
> +
> +  return PyBool_FromLong (section->compress_status == 1);
> +}
> +
> +/* Given a section, and a section_object that has previously been
> +   allocated and initialized, populate the section_object with the
> +   asection data.  Also, register the section_object life-cycle
> +   with the life-cycle of the object file associated with this
> +   section, if needed.  */
> +static void
> +set_section (section_object *obj, asection *section, struct objfile *objfile)
> +{
> +  obj->section = section;
> +  obj->prev = NULL;
> +  obj->objfile = objfile;
> +  obj->next = objfile_data (obj->objfile, secpy_objfile_data_key);
> +
> +  if (obj->next)
> +    obj->next->prev = obj;
> +
> +  set_objfile_data (obj->objfile, secpy_objfile_data_key, obj);
> +}
> +
> +/* Create a new section object (gdb.Section) that encapsulates the struct
> +   section object from GDB.  */
> +PyObject *
> +section_to_section_object (asection *section, struct objfile *objfile)
> +{
> +  section_object *sec_obj;
> +
> +  sec_obj = PyObject_New (section_object, &section_object_type);
> +  if (sec_obj) {
> +    set_section (sec_obj, section, objfile);
> +  }
> +
> +  return (PyObject *) sec_obj;
> +}
> +
> +/* Return the section that is wrapped by this section object.  */
> +asection *
> +section_object_to_section (PyObject *obj)
> +{
> +  if (! PyObject_TypeCheck (obj, &section_object_type))
> +    return NULL;
> +  return ((section_object *) obj)->section;
> +}
> +
> +static void
> +secpy_dealloc (PyObject *obj)
> +{
> +  section_object *section_obj = (section_object *) obj;
> +
> +  if (section_obj->prev)
> +    section_obj->prev->next = section_obj->next;
> +  else if (section_obj->objfile)
> +    {
> +      set_objfile_data (section_obj->objfile,
> +			secpy_objfile_data_key, section_obj->next);
> +    }
> +  if (section_obj->next)
> +    section_obj->next->prev = section_obj->prev;
> +  section_obj->section = NULL;
> +}
> +
> +static PyObject *
> +secpy_is_valid (PyObject *self, PyObject *args)
> +{
> +  asection *section = NULL;
> +
> +  section = section_object_to_section (self);
> +  if (section == NULL)
> +    Py_RETURN_FALSE;
> +
> +  Py_RETURN_TRUE;
> +}
> +
> +/* This function is called when an objfile is about to be freed.
> +   Invalidate the section as further actions on the section would result
> +   in bad data.  All access to obj->section should be gated by
> +   SYMPY_REQUIRE_VALID which will raise an exception on invalid
> +   sections.  */
> +static void
> +del_objfile_sections (struct objfile *objfile, void *datum)
> +{
> +  section_object *obj = datum;
> +  while (obj)
> +    {
> +      section_object *next = obj->next;
> +
> +      obj->section = NULL;
> +      obj->next = NULL;
> +      obj->prev = NULL;
> +
> +      obj = next;
> +    }
> +}
> +
> +int
> +gdbpy_initialize_sections (void)
> +{
> +  if (PyType_Ready (&section_object_type) < 0)
> +    return -1;
> +
> +  /* Register an objfile "free" callback so we can properly
> +     invalidate section when an object file that is about to be
> +     deleted.  */
> +  secpy_objfile_data_key
> +    = register_objfile_data_with_cleanup (NULL, del_objfile_sections);
> +
> +  if (PyModule_AddIntConstant (gdb_module, "SEC_NO_FLAGS", SEC_NO_FLAGS) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_ALLOC", SEC_ALLOC) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LOAD", SEC_LOAD) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_RELOC", SEC_RELOC) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_READONLY", SEC_READONLY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_CODE", SEC_CODE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_DATA", SEC_DATA) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_ROM", SEC_ROM) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_CONSTRUCTOR",
> +				  SEC_CONSTRUCTOR) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_HAS_CONTENTS",
> +				  SEC_HAS_CONTENTS) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_NEVER_LOAD",
> +				  SEC_NEVER_LOAD) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_THREAD_LOCAL",
> +				  SEC_THREAD_LOCAL) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_HAS_GOT_REF",
> +				  SEC_HAS_GOT_REF) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_IS_COMMON",
> +				  SEC_IS_COMMON) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_DEBUGGING",
> +				  SEC_DEBUGGING) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_IN_MEMORY",
> +				  SEC_IN_MEMORY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_EXCLUDE", SEC_EXCLUDE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_SORT_ENTRIES",
> +				  SEC_SORT_ENTRIES) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_ONCE",
> +				  SEC_LINK_ONCE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES",
> +				  SEC_LINK_DUPLICATES) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_DISCARD",
> +				  SEC_LINK_DUPLICATES_DISCARD) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_ONE_ONLY",
> +				  SEC_LINK_DUPLICATES_ONE_ONLY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_SAME_SIZE",
> +				  SEC_LINK_DUPLICATES_SAME_SIZE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINKER_CREATED",
> +				  SEC_LINKER_CREATED) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_KEEP", SEC_KEEP) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_SMALL_DATA",
> +				  SEC_SMALL_DATA) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_MERGE", SEC_MERGE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_STRINGS", SEC_STRINGS) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_GROUP", SEC_GROUP) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_SHARED_LIBRARY",
> +				  SEC_COFF_SHARED_LIBRARY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_ELF_REVERSE_COPY",
> +				  SEC_ELF_REVERSE_COPY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_SHARED",
> +				  SEC_COFF_SHARED) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_NOREAD",
> +				  SEC_COFF_NOREAD) < 0)
> +    return -1;
> +
> +  return gdb_pymodule_addobject (gdb_module, "Section",
> +				 (PyObject *) &section_object_type);
> +}
> +
> +\f
> +
> +static PyGetSetDef section_object_getset[] = {
> +  { "flags", secpy_get_flags, NULL,
> +    "Flags of the section.", NULL },
> +  { "objfile", secpy_get_objfile, NULL,
> +    "Object file in which the section appears.", NULL },
> +  { "name", secpy_get_name, NULL,
> +    "Name of the section, as it appears in the source code.", NULL },
> +  { "size", secpy_get_size, NULL, "Size of the section.", NULL },
> +  { "compressed_size", secpy_get_compressed_size, NULL,
> +    "Compressed size of the section.", NULL },
> +  { "rawsize", secpy_get_rawsize, NULL,
> +    "Size of the section on disk.", NULL },
> +  { "id", secpy_get_id, NULL,
> +    "Sequence number of the section.", NULL },
> +  { "print_name", secpy_get_print_name, NULL,
> +    "Name of the section in a form suitable for output.\n\
> +This is either name or linkage_name, depending on whether the user asked GDB\n\
> +to display demangled or mangled names.", NULL },
> +  { "vma", secpy_get_vma, NULL,
> +    "Virtual memory address of the section at runtime." },
> +  { "lma", secpy_get_lma, NULL,
> +    "Load memory address of the section." },
> +  { "is_compressed", secpy_is_compressed, NULL,
> +    "True if the section is compressed." },
> +  { NULL }  /* Sentinel */
> +};
> +
> +static PyMethodDef section_object_methods[] = {
> +  { "is_valid", secpy_is_valid, METH_NOARGS,
> +    "is_valid () -> Boolean.\n\
> +Return true if this section is valid, false if not." },
> +  {NULL}  /* Sentinel */
> +};
> +
> +PyTypeObject section_object_type = {
> +  PyVarObject_HEAD_INIT (NULL, 0)
> +  "gdb.Section",		  /*tp_name*/
> +  sizeof (section_object),	  /*tp_basicsize*/
> +  0,				  /*tp_itemsize*/
> +  secpy_dealloc,		  /*tp_dealloc*/
> +  0,				  /*tp_print*/
> +  0,				  /*tp_getattr*/
> +  0,				  /*tp_setattr*/
> +  0,				  /*tp_compare*/
> +  0,				  /*tp_repr*/
> +  0,				  /*tp_as_number*/
> +  0,				  /*tp_as_sequence*/
> +  0,				  /*tp_as_mapping*/
> +  0,				  /*tp_hash */
> +  0,				  /*tp_call*/
> +  secpy_str,			  /*tp_str*/
> +  0,				  /*tp_getattro*/
> +  0,				  /*tp_setattro*/
> +  0,				  /*tp_as_buffer*/
> +  Py_TPFLAGS_DEFAULT,		  /*tp_flags*/
> +  "GDB section object",		  /*tp_doc */
> +  0,				  /*tp_traverse */
> +  0,				  /*tp_clear */
> +  0,				  /*tp_richcompare */
> +  0,				  /*tp_weaklistoffset */
> +  0,				  /*tp_iter */
> +  0,				  /*tp_iternext */
> +  section_object_methods,	  /*tp_methods */
> +  0,				  /*tp_members */
> +  section_object_getset		  /*tp_getset */
> +};
> diff --git a/gdb/python/py-symbol.c b/gdb/python/py-symbol.c
> index 4306f61..1aa5477 100644
> --- a/gdb/python/py-symbol.c
> +++ b/gdb/python/py-symbol.c
> @@ -239,6 +239,28 @@ sympy_is_valid (PyObject *self, PyObject *args)
>    Py_RETURN_TRUE;
>  }
>  
> +static PyObject *
> +sympy_section (PyObject *self, void *closure)
> +{
> +  struct symbol *symbol = NULL;
> +  PyObject *section_obj;
> +  struct obj_section *section;
> +
> +  SYMPY_REQUIRE_VALID (self, symbol);
> +
> +  section = SYMBOL_OBJ_SECTION(symbol_objfile(symbol), symbol);
> +
> +  if (section) {
> +    section_obj = section_to_section_object(section->the_bfd_section,
> +                                            symbol_objfile(symbol));
> +    if (section_obj)
> +      return section_obj;
> +  }
> +
> +  Py_INCREF (Py_None);
> +  return Py_None;
> +}
> +
>  /* Implementation of gdb.Symbol.value (self[, frame]) -> gdb.Value.  Returns
>     the value of the symbol, or an error in various circumstances.  */
>  
> @@ -378,14 +400,26 @@ gdbpy_lookup_symbol (PyObject *self, PyObject *args, PyObject *kw)
>  
>    if (block_obj)
>      block = block_object_to_block (block_obj);
> -  else
> +  TRY
> +    {
> +      symbol = lookup_symbol (name, block, domain, &is_a_field_of_this);
> +    }
> +  CATCH (except, RETURN_MASK_ALL)
> +    {
> +      GDB_PY_HANDLE_EXCEPTION (except);
> +    }
> +  END_CATCH
> +
> +  if (!block)
>      {
>        struct frame_info *selected_frame;
>  
>        TRY
>  	{
> -	  selected_frame = get_selected_frame (_("No frame selected."));
> -	  block = get_frame_block (selected_frame, NULL);
> +	  if (symbol && symbol_read_needs_frame(symbol)) {
> +	    selected_frame = get_selected_frame (_("No frame selected."));
> +	    block = get_frame_block (selected_frame, NULL);
> +	  }
>  	}
>        CATCH (except, RETURN_MASK_ALL)
>  	{
> @@ -394,16 +428,6 @@ gdbpy_lookup_symbol (PyObject *self, PyObject *args, PyObject *kw)
>        END_CATCH
>      }
>  
> -  TRY
> -    {
> -      symbol = lookup_symbol (name, block, domain, &is_a_field_of_this);
> -    }
> -  CATCH (except, RETURN_MASK_ALL)
> -    {
> -      GDB_PY_HANDLE_EXCEPTION (except);
> -    }
> -  END_CATCH
> -
>    ret_tuple = PyTuple_New (2);
>    if (!ret_tuple)
>      return NULL;
> @@ -583,6 +607,8 @@ to display demangled or mangled names.", NULL },
>      "True if the symbol requires a frame for evaluation." },
>    { "line", sympy_line, NULL,
>      "The source line number at which the symbol was defined." },
> +  { "section", sympy_section, NULL,
> +    "Section of executable where symbol resides." },
>    { NULL }  /* Sentinel */
>  };
>  
> diff --git a/gdb/python/python-internal.h b/gdb/python/python-internal.h
> index ee949b7..e8776f1 100644
> --- a/gdb/python/python-internal.h
> +++ b/gdb/python/python-internal.h
> @@ -143,6 +143,8 @@ typedef int Py_ssize_t;
>  #define PyEval_ReleaseLock()
>  #endif
>  
> +#define gdb_py_long_from_pointer PyLong_FromLong
> +
>  /* Python supplies HAVE_LONG_LONG and some `long long' support when it
>     is available.  These defines let us handle the differences more
>     cleanly.  */
> @@ -241,6 +243,10 @@ extern PyTypeObject block_object_type
>      CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF("block_object");
>  extern PyTypeObject symbol_object_type
>      CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF ("symbol_object");
> +extern PyTypeObject section_object_type
> +     CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF ("section_object");
> +extern PyTypeObject objfile_object_type
> +     CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF ("objfile_object");
>  extern PyTypeObject event_object_type
>      CPYCHECKER_TYPE_OBJECT_FOR_TYPEDEF ("event_object");
>  extern PyTypeObject stop_event_object_type
> @@ -362,6 +368,8 @@ PyObject *gdbpy_frame_stop_reason_string (PyObject *, PyObject *);
>  PyObject *gdbpy_lookup_symbol (PyObject *self, PyObject *args, PyObject *kw);
>  PyObject *gdbpy_lookup_global_symbol (PyObject *self, PyObject *args,
>  				      PyObject *kw);
> +PyObject *gdbpy_lookup_minimal_symbol (PyObject *self, PyObject *args,
> +				       PyObject *kw);
>  PyObject *gdbpy_newest_frame (PyObject *self, PyObject *args);
>  PyObject *gdbpy_selected_frame (PyObject *self, PyObject *args);
>  PyObject *gdbpy_block_for_pc (PyObject *self, PyObject *args);
> @@ -381,6 +389,7 @@ char *gdbpy_parse_command_name (const char *name,
>  				struct cmd_list_element ***base_list,
>  				struct cmd_list_element **start_list);
>  
> +PyObject *section_to_section_object (asection *sym, struct objfile *objf);
>  PyObject *symtab_and_line_to_sal_object (struct symtab_and_line sal);
>  PyObject *symtab_to_symtab_object (struct symtab *symtab);
>  PyObject *symbol_to_symbol_object (struct symbol *sym);
> @@ -414,6 +423,7 @@ PyObject *find_inferior_object (int pid);
>  PyObject *inferior_to_inferior_object (struct inferior *inferior);
>  
>  const struct block *block_object_to_block (PyObject *obj);
> +asection *section_object_to_section (PyObject *obj);
>  struct symbol *symbol_object_to_symbol (PyObject *obj);
>  struct value *value_object_to_value (PyObject *self);
>  struct value *convert_value_from_python (PyObject *obj);
> @@ -436,6 +446,10 @@ int gdbpy_initialize_commands (void)
>    CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
>  int gdbpy_initialize_symbols (void)
>    CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
> +int gdbpy_initialize_minsymbols (void)
> +  CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
> +int gdbpy_initialize_sections (void)
> +  CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
>  int gdbpy_initialize_symtabs (void)
>    CPYCHECKER_NEGATIVE_RESULT_SETS_EXCEPTION;
>  int gdbpy_initialize_blocks (void)
> diff --git a/gdb/python/python.c b/gdb/python/python.c
> index 4f88b0e..817ec25 100644
> --- a/gdb/python/python.c
> +++ b/gdb/python/python.c
> @@ -1800,7 +1800,9 @@ message == an error message without a stack will be printed."),
>        || gdbpy_initialize_frames () < 0
>        || gdbpy_initialize_commands () < 0
>        || gdbpy_initialize_symbols () < 0
> +      || gdbpy_initialize_minsymbols () < 0
>        || gdbpy_initialize_symtabs () < 0
> +      || gdbpy_initialize_sections () < 0
>        || gdbpy_initialize_blocks () < 0
>        || gdbpy_initialize_functions () < 0
>        || gdbpy_initialize_parameters () < 0
> @@ -2025,7 +2027,10 @@ a boolean indicating if name is a field of the current implied argument\n\
>      METH_VARARGS | METH_KEYWORDS,
>      "lookup_global_symbol (name [, domain]) -> symbol\n\
>  Return the symbol corresponding to the given name (or None)." },
> -
> +{ "lookup_minimal_symbol", (PyCFunction) gdbpy_lookup_minimal_symbol,
> +    METH_VARARGS | METH_KEYWORDS,
> +    "lookup_minimal_symbol (name) -> minsym\n\
> +Return the symbol corresponding to the given name (or None)." },
>    { "lookup_objfile", (PyCFunction) gdbpy_lookup_objfile,
>      METH_VARARGS | METH_KEYWORDS,
>      "lookup_objfile (name, [by_build_id]) -> objfile\n\

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/4] Add SLAB allocator understanding.
  2016-01-31 21:45 ` [PATCH 3/4] Add SLAB allocator understanding Ales Novak
@ 2016-02-01 13:21   ` Kieran Bingham
  2016-02-01 22:30     ` Doug Evans
  2016-02-02 10:04     ` Vlastimil Babka
  0 siblings, 2 replies; 31+ messages in thread
From: Kieran Bingham @ 2016-02-01 13:21 UTC (permalink / raw)
  To: Ales Novak, gdb-patches; +Cc: Vlastimil Babka, Jan Kiszka

This is interesting work!

I had been discussing how we might manage this with Jan at FOSDEM
yesterday.

I believe a python implementation of this is possible. That code could
then live in the kernel tree and be split across architecture-specific
layers where necessary, so that the kernel awareness can handle the
boundaries between userspace applications and the kernel.

I believe that if it is properly abstracted (and it looks like it
already will be), with kdump as a target layer, we can implement the
kernel awareness layers above it, so that they are common to all of our
use-case scenarios.

I have recently proposed creating a gdb.Target object so that we can
layer the kernel-specific code on top as a higher stratum. That code
could then live in the kernel tree, be version-specific there, and
cooperate with the layer below it, be that a live target over JTAG, a
virtualised qemu/kvm, or a core dump file:

This way calling "(gdb) maintenance print target-stack" would look like:

The current target stack is:
  - Kernel Architecture Layer (specific implementations for ARM, ARMv8,
x86_64, i386... etc)
  - Kernel Awareness Layer (Common functionality, SLAB reader, Thread
awareness)
  - {remote (Remote serial target in gdb-specific protocol)}, or  -
{kdump , kdump interpretor layer}
  - exec (Local exec file)
  - None (None)
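
To make that concrete, here is a very rough Python sketch of how such a
kernel-awareness stratum might hook in. Note that gdb.Target is only a
proposal at this point, so the base class, its method names and the
gdb.register_target() call below are all hypothetical, not existing gdb
API:

import gdb

# Hypothetical sketch: gdb.Target, the method names used here and
# gdb.register_target() do not exist yet; they stand in for the
# proposed API.
class LinuxKernelAwareness(gdb.Target):
    """Kernel-awareness stratum stacked above kdump/remote/exec."""

    shortname = "linux-kernel"
    longname = "Linux kernel awareness layer"

    def update_thread_list(self):
        # Walk init_task.tasks and publish one gdb thread per
        # task_struct, using the stratum below (kdump, remote,
        # qemu/kvm, core file) for the raw memory reads.
        pass

    def fetch_registers(self, regcache, regnum):
        # Fill the regcache from the dump metadata for active tasks,
        # or from the saved context on the task's kernel stack.
        pass

gdb.register_target(LinuxKernelAwareness())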

Please let me know your thoughts on this, and how we can work
together.

--
Regards

Kieran


On 31/01/16 21:44, Ales Novak wrote:
> From: Vlastimil Babka <vbabka@suse.cz>
>
> ---
>  gdb/kdump.c | 1259 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 1211 insertions(+), 48 deletions(-)
>
> diff --git a/gdb/kdump.c b/gdb/kdump.c
> index b7b0ef5..e231559 100644
> --- a/gdb/kdump.c
> +++ b/gdb/kdump.c
> @@ -58,6 +58,7 @@
>  #include <sys/types.h>
>  #include <sys/stat.h>
>  #include <unistd.h>
> +#include <hashtab.h>
>  
>  
>  #include <dirent.h>
> @@ -73,6 +74,7 @@ typedef unsigned long long offset;
>  #define F_UNKN_ENDIAN    4
>  
>  unsigned long long kt_int_value (void *buff);
> +unsigned long long kt_long_value (void *buff);
>  unsigned long long kt_ptr_value (void *buff);
>  
>  int kt_hlist_head_for_each_node (char *addr, int(*func)(void *,offset), void *data);
> @@ -97,12 +99,17 @@ static void core_close (struct target_ops *self);
>  
>  typedef unsigned long long offset;
>  
> +static int nr_node_ids = 1;
> +static int nr_cpu_ids = 1;
> +
>  #define KDUMP_TYPE const char *_name; int _size; int _offset; struct type *_origtype
>  #define GET_GDB_TYPE(typ) types. typ ._origtype
>  #define GET_TYPE_SIZE(typ) (TYPE_LENGTH(GET_GDB_TYPE(typ)))
>  #define MEMBER_OFFSET(type,member) types. type. member
> -#define KDUMP_TYPE_ALLOC(type) kdump_type_alloc(GET_GDB_TYPE(type))
> -#define KDUMP_TYPE_GET(type,off,where) kdump_type_get(GET_GDB_TYPE(type), off, 0, where)
> +#define KDUMP_TYPE_ALLOC(type) kdump_type_alloc(GET_GDB_TYPE(type), 0)
> +#define KDUMP_TYPE_ALLOC_EXTRA(type,extra) kdump_type_alloc(GET_GDB_TYPE(type),extra)
> +#define KDUMP_TYPE_GET(type,off,where) kdump_type_get(GET_GDB_TYPE(type), off, 0, where, 0)
> +#define KDUMP_TYPE_GET_EXTRA(type,off,where,extra) kdump_type_get(GET_GDB_TYPE(type), off, 0, where, extra)
>  #define KDUMP_TYPE_FREE(where) free(where)
>  #define SYMBOL(var,name) do { var = lookup_symbol(name, NULL, VAR_DOMAIN, NULL); if (! var) { fprintf(stderr, "Cannot lookup_symbol(" name ")\n"); goto error; } } while(0)
>  #define OFFSET(x) (types.offsets. x)
> @@ -112,12 +119,12 @@ typedef unsigned long long offset;
>  #define GET_REGISTER_OFFSET(reg) (MEMBER_OFFSET(user_regs_struct,reg)/GET_TYPE_SIZE(_voidp))
>  #define GET_REGISTER_OFFSET_pt(reg) (MEMBER_OFFSET(pt_regs,reg)/GET_TYPE_SIZE(_voidp))
>  
> -#define list_for_each(pos, head) \
> -	for (pos = kt_ptr_value(head); pos != (head); KDUMP_TYPE_GET(_voidp,pos,&pos)
>  
> -#define list_head_for_each(head,lhb, _nxt) for((_nxt = kt_ptr_value(lhb)), KDUMP_TYPE_GET(list_head, _nxt, lhb);\
> -	(_nxt = kt_ptr_value(lhb)) != head; \
> -	KDUMP_TYPE_GET(list_head, _nxt, lhb))
> +#define list_head_for_each(head, lhb, _nxt)				      \
> +	for(KDUMP_TYPE_GET(list_head, head, lhb), _nxt = kt_ptr_value(lhb),   \
> +					KDUMP_TYPE_GET(list_head, _nxt, lhb); \
> +		_nxt != head;						      \
> +		_nxt = kt_ptr_value(lhb), KDUMP_TYPE_GET(list_head, _nxt, lhb))
>  
>  enum x86_64_regs {
>  	reg_RAX = 0,
> @@ -184,6 +191,10 @@ struct {
>  
>  	struct {
>  		KDUMP_TYPE;
> +	} _long;
> +
> +	struct {
> +		KDUMP_TYPE;
>  	} _voidp;
>  
>  	struct {
> @@ -345,10 +356,54 @@ struct {
>  		offset *percpu_offsets;
>  	} offsets;
>  
> +	struct {
> +		KDUMP_TYPE;
> +		offset flags;
> +		offset lru;
> +		offset first_page;
> +	} page;
> +
> +	struct {
> +		KDUMP_TYPE;
> +		offset array;
> +		offset name;
> +		offset list;
> +		offset nodelists;
> +		offset num;
> +		offset buffer_size;
> +	} kmem_cache;
> +
> +	struct {
> +		KDUMP_TYPE;
> +		offset slabs_partial;
> +		offset slabs_full;
> +		offset slabs_free;
> +		offset shared;
> +		offset alien;
> +		offset free_objects;
> +	} kmem_list3;
> +
> +	struct {
> +		KDUMP_TYPE;
> +		offset avail;
> +		offset limit;
> +		offset entry;
> +	} array_cache;
> +
> +	struct {
> +		KDUMP_TYPE;
> +		offset list;
> +		offset inuse;
> +		offset free;
> +		offset s_mem;
> +	} slab;
> +
>  	struct cpuinfo *cpu;
>  	int ncpus;
>  } types;
>  
> +unsigned PG_tail, PG_slab;
> +
>  struct task_info {
>  	offset task_struct;
>  	offset sp;
> @@ -404,6 +459,21 @@ unsigned long long kt_int_value (void *buff)
>  	return val;
>  }
>  
> +unsigned long long kt_long_value (void *buff)
> +{
> +	unsigned long long val;
> +
> +	if (GET_TYPE_SIZE(_long) == 4) {
> +		val = *(int32_t*)buff;
> +		if (types.flags & F_BIG_ENDIAN) val = __bswap_32(val);
> +	} else {
> +		val = *(int64_t*)buff;
> +		if (types.flags & F_BIG_ENDIAN) val = __bswap_64(val);
> +	}
> +
> +	return val;
> +}
> +
>  unsigned long long kt_ptr_value (void *buff)
>  {
>  	unsigned long long val;
> @@ -417,6 +487,49 @@ unsigned long long kt_ptr_value (void *buff)
>  	}
>  	return val;
>  }
> +
> +static unsigned long long kt_ptr_value_off (offset addr)
> +{
> +	char buf[8];
> +	unsigned len = GET_TYPE_SIZE(_voidp);
> +
> +	if (target_read_raw_memory(addr, (void *)buf, len)) {
> +		warning(_("Cannot read target memory addr=%llx length=%u\n"),
> +								addr, len);
> +		return -1;
> +	}
> +
> +	return kt_ptr_value(buf);
> +}
> +
> +static unsigned long long kt_int_value_off (offset addr)
> +{
> +	char buf[8];
> +	unsigned len = GET_TYPE_SIZE(_int);
> +
> +	if (target_read_raw_memory(addr, (void *)buf, len)) {
> +		warning(_("Cannot read target memory addr=%llx length=%u\n"),
> +								addr, len);
> +		return -1;
> +	}
> +
> +	return kt_int_value(buf);
> +}
> +
> +char * kt_strndup (offset src, int n);
> +char * kt_strndup (offset src, int n)
> +{
> +	char *dest = NULL;
> +	int ret, errno;
> +
> +	ret = target_read_string(src, &dest, n, &errno);
> +
> +	if (errno)
> +		fprintf(stderr, "target_read_string errno: %d\n", errno);
> +
> +	return dest;
> +}
> +
>  static offset get_symbol_address(const char *sname);
>  static offset get_symbol_address(const char *sname)
>  {
> @@ -519,35 +632,55 @@ static int kdump_type_member_init (struct type *type, const char *name, offset *
>  {
>  	int i;
>  	struct field *f;
> +	int ret;
> +	enum type_code tcode;
> +	offset off;
> +
>  	f = TYPE_FIELDS(type);
> -	for (i = 0; i < TYPE_NFIELDS(type); i ++) {
> -		if (! strcmp(f->name, name)) {
> -			*poffset = (f->loc.physaddr >> 3);
> +	for (i = 0; i < TYPE_NFIELDS(type); i++, f++) {
> +		//printf("fieldname \'%s\'\n", f->name);
> +		off = (f->loc.physaddr >> 3);
> +		if (!strcmp(f->name, name)) {
> +			*poffset = off;
>  			return 0;
>  		}
> -		f++;
> +		if (strlen(f->name))
> +			continue;
> +		tcode = TYPE_CODE(f->type);
> +		if (tcode == TYPE_CODE_UNION || tcode == TYPE_CODE_STRUCT) {
> +			//printf("recursing into unnamed union/struct\n");
> +			ret = kdump_type_member_init(f->type, name, poffset);
> +			if (ret != -1) {
> +				*poffset += off;
> +				return ret;
> +			}
> +		}
>  	}
>  	return -1;
>  }
>  
> -static void *kdump_type_alloc(struct type *type)
> +static void *kdump_type_alloc(struct type *type, size_t extra_size)
>  {
>  	int allocated = 0;
>  	void *buff;
>  
>  	allocated = 1;
> -	buff = malloc(TYPE_LENGTH(type));
> +	buff = malloc(TYPE_LENGTH(type) + extra_size);
>  	if (buff == NULL) {
> -		warning(_("Cannot allocate memory of %d length\n"), (int)TYPE_LENGTH(type));
> +		warning(_("Cannot allocate memory of %u length + %lu extra\n"),
> +					TYPE_LENGTH(type), extra_size);
>  		return NULL;
>  	}
>  	return buff;
>  }
>  
> -static int kdump_type_get(struct type *type, offset addr, int pos, void *buff)
> +static int kdump_type_get(struct type *type, offset addr, int pos, void *buff,
> +							size_t extra_size)
>  {
> -	if (target_read_raw_memory(addr + (TYPE_LENGTH(type)*pos), buff, TYPE_LENGTH(type))) {
> -		warning(_("Cannot read target memory of %d length\n"), (int)TYPE_LENGTH(type));
> +	if (target_read_raw_memory(addr + (TYPE_LENGTH(type)*pos), buff,
> +					TYPE_LENGTH(type) + extra_size)) {
> +		warning(_("Cannot read target memory of %u length + %lu extra\n"),
> +					TYPE_LENGTH(type), extra_size);
>  		return 1;
>  	}
>  	return 0;
> @@ -568,7 +701,8 @@ int kdump_types_init(int flags)
>  	#define INIT_BASE_TYPE_(name,tname) if(kdump_type_init(&types. tname ._origtype, &types. tname ._size, #name, T_BASE)) { fprintf(stderr, "Cannot base find type \'%s\'", #name); break; }
>  	#define INIT_REF_TYPE(name) if(kdump_type_init(&types. name ._origtype, &types. name ._size, #name, T_REF)) { fprintf(stderr, "Cannot ref find type \'%s\'", #name); break; }
>  	#define INIT_REF_TYPE_(name,tname) if(kdump_type_init(&types. tname ._origtype, &types. tname ._size, #name, T_REF)) { fprintf(stderr, "Cannot ref find type \'%s\'", #name); break; }
> -	#define INIT_STRUCT_MEMBER(sname,mname) if(kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mname)) { break; }
> +	#define INIT_STRUCT_MEMBER(sname,mname) if(kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mname)) \
> +		{ fprintf(stderr, "Cannot find struct \'%s\' member \'%s\'", #sname, #mname); break; }
>  
>  	/** initialize member with different name than the containing one */
>  	#define INIT_STRUCT_MEMBER_(sname,mname,mmname) if(kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mmname)) { break; }
> @@ -576,8 +710,9 @@ int kdump_types_init(int flags)
>  	/** don't fail if the member is not present */
>  	#define INIT_STRUCT_MEMBER__(sname,mname) kdump_type_member_init(types. sname ._origtype, #mname, &types. sname . mname)
>  	do {
> -		INIT_BASE_TYPE_(int,_int);
> -		INIT_REF_TYPE_(void,_voidp);
> +		INIT_BASE_TYPE_(int,_int); 
> +		INIT_BASE_TYPE_(long,_long);
> +		INIT_REF_TYPE_(void,_voidp); 
>  
>  		INIT_STRUCT(list_head);
>  		INIT_STRUCT_MEMBER(list_head,prev);
> @@ -728,9 +863,43 @@ int kdump_types_init(int flags)
>  			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx6);
>  			INIT_STRUCT_MEMBER__(ppc_pt_regs, rx7);
>  		}
> +		INIT_STRUCT(page);
> +		INIT_STRUCT_MEMBER(page, flags);
> +		INIT_STRUCT_MEMBER(page, lru);
> +		INIT_STRUCT_MEMBER(page, first_page);
> +
> +		INIT_STRUCT(kmem_cache);
> +		INIT_STRUCT_MEMBER(kmem_cache, name);
> +		INIT_STRUCT_MEMBER_(kmem_cache, next, list);
> +		INIT_STRUCT_MEMBER(kmem_cache, nodelists);
> +		INIT_STRUCT_MEMBER(kmem_cache, num);
> +		INIT_STRUCT_MEMBER(kmem_cache, array);
> +		INIT_STRUCT_MEMBER(kmem_cache, buffer_size);
> +
> +		INIT_STRUCT(kmem_list3);
> +		INIT_STRUCT_MEMBER(kmem_list3, slabs_partial);
> +		INIT_STRUCT_MEMBER(kmem_list3, slabs_full);
> +		INIT_STRUCT_MEMBER(kmem_list3, slabs_free);
> +		INIT_STRUCT_MEMBER(kmem_list3, shared);
> +		INIT_STRUCT_MEMBER(kmem_list3, alien);
> +		INIT_STRUCT_MEMBER(kmem_list3, free_objects);
> +
> +		INIT_STRUCT(array_cache);
> +		INIT_STRUCT_MEMBER(array_cache, avail);
> +		INIT_STRUCT_MEMBER(array_cache, limit);
> +		INIT_STRUCT_MEMBER(array_cache, entry);
> +
> +		INIT_STRUCT(slab);
> +		INIT_STRUCT_MEMBER(slab, list);
> +		INIT_STRUCT_MEMBER(slab, inuse);
> +		INIT_STRUCT_MEMBER(slab, free);
> +		INIT_STRUCT_MEMBER(slab, s_mem);
>  		ret = 0;
>  	} while(0);
>  
> +	PG_tail = get_symbol_value("PG_tail");
> +	PG_slab = get_symbol_value("PG_slab");
> +
>  	if (ret) {
>  		fprintf(stderr, "Cannot init types\n");
>  	}
> @@ -738,6 +907,148 @@ int kdump_types_init(int flags)
>  	return ret;
>  }
>  
> +struct list_iter {
> +	offset curr;
> +	offset prev;
> +	offset head;
> +	offset last;
> +	offset fast;
> +	int cont;
> +	int error;
> +};
> +
> +static void list_first_from(struct list_iter *iter, offset o_head)
> +{
> +	char b_head[GET_TYPE_SIZE(list_head)];
> +
> +	iter->fast = 0;
> +	iter->error = 0;
> +	iter->cont = 1;
> +
> +	if (KDUMP_TYPE_GET(list_head, o_head, b_head)) {
> +		warning(_("Could not read list_head %llx in list_first()\n"),
> +								o_head);
> +		iter->error = 1;
> +		iter->cont = 0;
> +		return;
> +	}
> +
> +	iter->curr = o_head;
> +	iter->last = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, prev));
> +
> +	iter->head = o_head;
> +	iter->prev = iter->last;
> +}
> +
> +static void list_first(struct list_iter *iter, offset o_head)
> +{
> +	char b_head[GET_TYPE_SIZE(list_head)];
> +
> +	iter->fast = 0;
> +	iter->error = 0;
> +	iter->cont = 1;
> +
> +	if (KDUMP_TYPE_GET(list_head, o_head, b_head)) {
> +		warning(_("Could not read list_head %llx in list_first()\n"),
> +								o_head);
> +		iter->error = 1;
> +		iter->cont = 0;
> +		return;
> +	}
> +
> +	iter->curr = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, next));
> +	iter->last = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, prev));
> +
> +	/* Empty list */
> +	if (iter->curr == o_head) {
> +		if (iter->last != o_head) {
> +			warning(_("list_head %llx is empty, but prev points to %llx\n"),
> +							o_head,	iter->last);
> +			iter->error = 1;
> +		}
> +		iter->cont = 0;
> +		return;
> +	}
> +
> +	iter->head = o_head;
> +	iter->prev = o_head;
> +}
> +
> +static void list_next(struct list_iter *iter)
> +{
> +	char b_head[GET_TYPE_SIZE(list_head)];
> +	offset o_next, o_prev;
> +
> +	if (KDUMP_TYPE_GET(list_head, iter->curr, b_head)) {
> +		warning(_("Could not read list_head %llx in list_next()\n"),
> +								iter->curr);
> +		iter->error = 1;
> +		iter->cont = 0;
> +		return;
> +	}
> +
> +	o_next = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, next));
> +	o_prev = kt_ptr_value(b_head + MEMBER_OFFSET(list_head, prev));
> +
> +	if (o_next == iter->head) {
> +		if (iter->curr != iter->last) {
> +			warning(_("list item %llx appears to be last, but list_head %llx ->prev points to %llx\n"),
> +						iter->curr, iter->head,
> +						iter->last);
> +			iter->error = 1;
> +		}
> +		iter->cont = 0;
> +		return;
> +	}
> +
> +	if (o_prev != iter->prev) {
> +		warning(_("list item %llx ->next is %llx but the latter's ->prev is %llx\n"),
> +					iter->prev, iter->curr, o_prev);
> +		iter->error = 1;
> +		/*
> +		 * broken ->prev link means that there might be cycle that
> +		 * does not include head; start detecting cycles
> +		 */
> +		if (!iter->fast)
> +			iter->fast = iter->curr;
> +	}
> +
> +	/*
> +	 * Are we detecting cycles? If so, advance iter->fast to
> +	 * iter->curr->next->next and compare iter->curr to both next's
> +	 * (Floyd's Tortoise and Hare algorithm)
> +	 *
> +	 */
> +	if (iter->fast) {
> +		int i = 2;
> +		while(i--) {
> +			/*
> +			 *  Simply ignore failure to read fast->next, the next
> +			 *  call to list_next() will find out anyway.
> +			 */
> +			if (KDUMP_TYPE_GET(list_head, iter->fast, b_head))
> +				break;
> +			iter->fast = kt_ptr_value(
> +				b_head + MEMBER_OFFSET(list_head, next));
> +			if (iter->curr == iter->fast) {
> +				warning(_("list_next() detected cycle, aborting traversal\n"));
> +				iter->error = 1;
> +				iter->cont = 0;
> +				return;
> +			}
> +		}
> +	}
> +
> +	iter->prev = iter->curr;
> +	iter->curr = o_next;
> +}
> +
> +#define list_for_each(iter, o_head) \
> +	for (list_first(&(iter), o_head); (iter).cont; list_next(&(iter)))
> +
> +#define list_for_each_from(iter, o_head) \
> +	for (list_first_from(&(iter), o_head); (iter).cont; list_next(&(iter)))
> +
>  int kt_hlist_head_for_each_node (char *addr, int(*func)(void *,offset), void *data)
>  {
>  	char *b = NULL;
> @@ -995,7 +1306,8 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
>  			 * FIXME: use the size obtained from debuginfo
>  			 */
>  			rsp += 0x148;
> -			target_read_raw_memory(rsp - 0x8 * (1 + 6), (void*)regs, 0x8 * 6);
> +			if (target_read_raw_memory(rsp - 0x8 * (1 + 6), (void*)regs, 0x8 * 6))
> +				warning(_("Could not read regs\n"));
>  
>  			regcache_raw_supply(rc, 15, &regs[5]);
>  			regcache_raw_supply(rc, 14, &regs[4]);
> @@ -1026,7 +1338,6 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
>  			REG(reg_RSP,sp);
>  			task_info->sp = reg;
>  			REG(reg_RIP,ip);
> -			printf ("task %p cpu %02d rip = %p\n", (void*)task_info->task_struct, cpu, reg);
>  			task_info->ip = reg;
>  			REG(reg_RAX,ax);
>  			REG(reg_RCX,cx);
> @@ -1092,13 +1403,860 @@ static int add_task(offset off_task, int *pid_reserve, char *task)
>  	return 0;
>  }
>  
> +struct list_head {
> +	offset next;
> +	offset prev;
> +};
> +
> +struct page {
> +	unsigned long flags;
> +	struct list_head lru;
> +	offset first_page;
> +	int valid;
> +};
> +
> +enum slab_type {
> +	slab_partial,
> +	slab_full,
> +	slab_free
> +};
> +
> +static const char *slab_type_names[] = {
> +	"partial",
> +	"full",
> +	"free"
> +};
> +
> +enum ac_type {
> +	ac_percpu,
> +	ac_shared,
> +	ac_alien
> +};
> +
> +static const char *ac_type_names[] = {
> +	"percpu",
> +	"shared",
> +	"alien"
> +};
> +
> +typedef unsigned int kmem_bufctl_t;
> +#define BUFCTL_END      (((kmem_bufctl_t)(~0U))-0)
> +#define BUFCTL_FREE     (((kmem_bufctl_t)(~0U))-1)
> +#define BUFCTL_ACTIVE   (((kmem_bufctl_t)(~0U))-2)
> +#define SLAB_LIMIT      (((kmem_bufctl_t)(~0U))-3)
> +
> +
> +struct kmem_cache {
> +	offset o_cache;
> +	const char *name;
> +	unsigned int num;
> +	htab_t obj_ac;
> +	unsigned int buffer_size;
> +	int array_caches_inited;
> +	int broken;
> +};
> +
> +struct kmem_slab {
> +	offset o_slab;
> +	kmem_bufctl_t free;
> +	unsigned int inuse;
> +	offset s_mem;
> +	kmem_bufctl_t *bufctl;
> +};
> +
> +/* Cache of kmem_cache structs indexed by offset */
> +static htab_t kmem_cache_cache;
> +
> +/* List_head of all kmem_caches */
> +offset o_slab_caches;
> +
> +/* Just get the least significant bits of the offset */
> +static hashval_t kmem_cache_hash(const void *p)
> +{
> +	return ((struct kmem_cache*)p)->o_cache;
> +}
> +
> +static int kmem_cache_eq(const void *cache, const void *off)
> +{
> +	return (((struct kmem_cache*)cache)->o_cache == *(offset *)off);
> +}
> +
> +struct kmem_ac {
> +	offset offset;
> +	enum ac_type type;
> +	/* At which node cache resides (-1 for percpu) */
> +	int at_node;
> +	/* For which node or cpu the cache is (-1 for shared) */
> +	int for_node_cpu;
> +};
> +
> +/* A mapping between object's offset and array_cache */
> +struct kmem_obj_ac {
> +	offset obj;
> +	struct kmem_ac *ac;
> +};
> +
> +static hashval_t kmem_ac_hash(const void *p)
> +{
> +	return ((struct kmem_obj_ac*)p)->obj;
> +}
> +
> +static int kmem_ac_eq(const void *obj, const void *off)
> +{
> +	return (((struct kmem_obj_ac*)obj)->obj == *(offset *)off);
> +}
> +
> +//FIXME: support the CONFIG_PAGEFLAGS_EXTENDED variant?
> +#define PageTail(page)	(page.flags & 1UL << PG_tail)
> +#define PageSlab(page)	(page.flags & 1UL << PG_slab)
> +
> +//TODO: get this via libkdumpfile somehow?
> +#define VMEMMAP_START	0xffffea0000000000UL
> +#define PAGE_SHIFT	12
> +
> +static unsigned long long memmap = VMEMMAP_START;
> +
> +static offset pfn_to_page_memmap(unsigned long pfn)
> +{
> +	return memmap + pfn*GET_TYPE_SIZE(page);
> +}
> +
> +//TODO: once the config querying below works, support all variants
> +#define pfn_to_page(pfn) pfn_to_page_memmap(pfn)
> +
> +static kdump_paddr_t transform_memory(kdump_paddr_t addr);
> +
> +static unsigned long addr_to_pfn(offset addr)
> +{
> +	kdump_paddr_t pa = transform_memory(addr);
> +
> +	return pa >> PAGE_SHIFT;
> +}
> +
> +#define virt_to_opage(addr)	pfn_to_page(addr_to_pfn(addr))
> +static int check_slab_obj(offset obj);
> +static int init_kmem_caches(void);
> +static struct page virt_to_head_page(offset addr);
> +
> +
> +//TODO: have some hashtable-based cache as well?
> +static struct kmem_slab *
> +init_kmem_slab(struct kmem_cache *cachep, offset o_slab)
> +{
> +	char b_slab[GET_TYPE_SIZE(slab)];
> +	struct kmem_slab *slab;
> +	offset o_bufctl = o_slab + GET_TYPE_SIZE(slab);
> +	size_t bufctl_size = cachep->num * sizeof(kmem_bufctl_t);
> +	//FIXME: use target's kmem_bufctl_t typedef, which didn't work in
> +	//INIT_BASE_TYPE though
> +	size_t bufctl_size_target = cachep->num * GET_TYPE_SIZE(_int);
> +	char b_bufctl[bufctl_size_target];
> +	int i;
> +
> +	if (KDUMP_TYPE_GET(slab, o_slab, b_slab)) {
> +		warning(_("error reading struct slab %llx of cache %s\n"),
> +							o_slab, cachep->name);
> +		return NULL;
> +	}
> +
> +	slab = malloc(sizeof(struct kmem_slab));
> +
> +	slab->o_slab = o_slab;
> +	slab->inuse = kt_int_value(b_slab + MEMBER_OFFSET(slab, inuse));
> +	slab->free = kt_int_value(b_slab + MEMBER_OFFSET(slab, free));
> +	slab->s_mem = kt_ptr_value(b_slab + MEMBER_OFFSET(slab, s_mem));
> +
> +	slab->bufctl = malloc(bufctl_size);
> +	if (target_read_raw_memory(o_bufctl, (void *) b_bufctl,
> +				bufctl_size_target)) {
> +		warning(_("error reading bufctl %llx of slab %llx of cache %s\n"),
> +						o_bufctl, o_slab, cachep->name);
> +		for (i = 0; i < cachep->num; i++)
> +			slab->bufctl[i] = BUFCTL_END;
> +
> +		return slab;
> +	}
> +
> +	for (i = 0; i < cachep->num; i++)
> +		slab->bufctl[i] = kt_int_value(b_bufctl + i*GET_TYPE_SIZE(_int));
> +
> +	return slab;
> +}
> +
> +static void free_kmem_slab(struct kmem_slab *slab)
> +{
> +	free(slab->bufctl);
> +	free(slab);
> +}
> +
> +static unsigned int
> +check_kmem_slab(struct kmem_cache *cachep, struct kmem_slab *slab,
> +							enum slab_type type)
> +{
> +	unsigned int counted_free = 0;
> +	kmem_bufctl_t i;
> +	offset o_slab = slab->o_slab;
> +	offset o_obj, o_prev_obj = 0;
> +	struct page page;
> +	offset o_page_cache, o_page_slab;
> +
> +	i = slab->free;
> +	while (i != BUFCTL_END) {
> +		counted_free++;
> +
> +		if (counted_free > cachep->num) {
> +			printf("free bufctl cycle detected in slab %llx\n", o_slab);
> +			break;
> +		}
> +		if (i > cachep->num) {
> +			printf("bufctl value overflow (%d) in slab %llx\n", i, o_slab);
> +			break;
> +		}
> +
> +		i = slab->bufctl[i];
> +	}
> +
> +//	printf("slab inuse=%d cnt_free=%d num=%d\n", slab->inuse, counted_free,
> +//								cachep->num);
> +
> +	if (slab->inuse + counted_free != cachep->num)
> +		 printf("slab %llx #objs mismatch: inuse=%d + cnt_free=%d != num=%d\n",
> +				o_slab, slab->inuse, counted_free, cachep->num);
> +
> +	switch (type) {
> +	case slab_partial:
> +		if (!slab->inuse)
> +			printf("slab %llx has zero inuse but is on slabs_partial\n", o_slab);
> +		else if (slab->inuse == cachep->num)
> +			printf("slab %llx is full (%d) but is on slabs_partial\n", o_slab, slab->inuse);
> +		break;
> +	case slab_full:
> +		if (!slab->inuse)
> +			printf("slab %llx has zero inuse but is on slabs_full\n", o_slab);
> +		else if (slab->inuse < cachep->num)
> +			printf("slab %llx has %d/%d inuse but is on slabs_full\n", o_slab, slab->inuse, cachep->num);
> +		break;
> +	case slab_free:
> +		if (slab->inuse)
> +			printf("slab %llx has %d/%d inuse but is on slabs_empty\n", o_slab, slab->inuse, cachep->num);
> +		break;
> +	default:
> +		exit(1);
> +	}
> +
> +	for (i = 0; i < cachep->num; i++) {
> +		o_obj = slab->s_mem + i * cachep->buffer_size;
> +		if (o_prev_obj >> PAGE_SHIFT == o_obj >> PAGE_SHIFT)
> +			continue;
> +
> +		o_prev_obj = o_obj;
> +		page = virt_to_head_page(o_obj);
> +		if (!page.valid) {
> +			warning(_("slab %llx object %llx could not read struct page\n"),
> +					o_slab, o_obj);
> +			continue;
> +		}
> +		if (!PageSlab(page))
> +			warning(_("slab %llx object %llx is not on PageSlab page\n"),
> +					o_slab, o_obj);
> +		o_page_cache = page.lru.next;
> +		o_page_slab = page.lru.prev;
> +
> +		if (o_page_cache != cachep->o_cache)
> +			warning(_("cache %llx (%s) object %llx is on page where lru.next points to %llx and not the cache\n"),
> +					cachep->o_cache, cachep->name, o_obj,
> +					o_page_cache);
> +		if (o_page_slab != o_slab)
> +			warning(_("slab %llx object %llx is on page where lru.prev points to %llx and not the slab\n"),
> +					o_slab, o_obj, o_page_slab);
> +	}
> +
> +	return counted_free;
> +}
> +
> +static unsigned long
> +check_kmem_slabs(struct kmem_cache *cachep, offset o_slabs,
> +							enum slab_type type)
> +{
> +	struct list_iter iter;
> +	offset o_slab;
> +	struct kmem_slab *slab;
> +	unsigned long counted_free = 0;
> +
> +//	printf("checking slab list %llx type %s\n", o_slabs,
> +//							slab_type_names[type]);
> +
> +	list_for_each(iter, o_slabs) {
> +		o_slab = iter.curr - MEMBER_OFFSET(slab, list);
> +//		printf("found slab: %llx\n", o_slab);
> +		slab = init_kmem_slab(cachep, o_slab);
> +		if (!slab)
> +			continue;
> +
> +		counted_free += check_kmem_slab(cachep, slab, type);
> +		free_kmem_slab(slab);
> +	}
> +
> +	return counted_free;
> +}
> +
> +/* Check that o_obj points to an object on slab of kmem_cache */
> +static void check_kmem_obj(struct kmem_cache *cachep, offset o_obj)
> +{
> +	struct page page;
> +	offset o_cache, o_slab;
> +	offset obj_base;
> +	unsigned int idx;
> +	struct kmem_slab *slabp;
> +
> +	page = virt_to_head_page(o_obj);
> +
> +	if (!PageSlab(page))
> +		warning(_("object %llx is not on PageSlab page\n"), o_obj);
> +
> +	o_cache = page.lru.next;
> +	if (o_cache != cachep->o_cache)
> +		warning(_("object %llx is on page that should belong to cache "
> +				"%llx (%s), but lru.next points to %llx\n"),
> +				o_obj, cachep->o_cache, cachep->name, o_cache);
> +
> +	o_slab = page.lru.prev;
> +	slabp = init_kmem_slab(cachep, o_slab);
> +
> +	//TODO: check also that slabp is in appropriate lists? could be quite slow...
> +	if (!slabp)
> +		return;
> +
> +	//TODO: kernel implementation uses reciprocal_divide, check?
> +	idx = (o_obj - slabp->s_mem) / cachep->buffer_size;
> +	obj_base = slabp->s_mem + idx * cachep->buffer_size;
> +
> +	if (obj_base != o_obj)
> +		warning(_("pointer %llx should point to beginning of object "
> +				"but object's address is %llx\n"), o_obj,
> +				obj_base);
> +
> +	if (idx >= cachep->num)
> +		warning(_("object %llx has index %u, but there should be only "
> +				"%u objects on slabs of cache %llx"),
> +				o_obj, idx, cachep->num, cachep->o_cache);
> +}
> +
> +static void init_kmem_array_cache(struct kmem_cache *cachep,
> +		offset o_array_cache, char *b_array_cache, enum ac_type type,
> +		int id1, int id2)
> +{
> +	unsigned int avail, limit, i;
> +	char *b_entries;
> +	offset o_entries = o_array_cache + MEMBER_OFFSET(array_cache, entry);
> +	offset o_obj;
> +	void **slot;
> +	struct kmem_ac *ac;
> +	struct kmem_obj_ac *obj_ac;
> +
> +	avail = kt_int_value(b_array_cache + MEMBER_OFFSET(array_cache, avail));
> +	limit = kt_int_value(b_array_cache + MEMBER_OFFSET(array_cache, limit));
> +
> +//	printf("found %s[%d,%d] array_cache %llx\n", ac_type_names[type],
> +//						id1, id2, o_array_cache);
> +//	printf("avail=%u limit=%u entries=%llx\n", avail, limit, o_entries);
> +
> +	if (avail > limit)
> +		printf("array_cache %llx has avail=%d > limit=%d\n",
> +						o_array_cache, avail, limit);
> +
> +	if (!avail)
> +		return;
> +
> +	ac = malloc(sizeof(struct kmem_ac));
> +	ac->offset = o_array_cache;
> +	ac->type = type;
> +	ac->at_node = id1;
> +	ac->for_node_cpu = id2;
> +
> +	b_entries = malloc(avail * GET_TYPE_SIZE(_voidp));
> +
> +	if (target_read_raw_memory(o_entries, (void *)b_entries,
> +					avail *	GET_TYPE_SIZE(_voidp))) {
> +		warning(_("could not read entries of array_cache %llx of cache %s\n"),
> +						o_array_cache, cachep->name);
> +		goto done;
> +	}
> +
> +	for (i = 0; i < avail; i++) {
> +		o_obj = kt_ptr_value(b_entries + i * GET_TYPE_SIZE(_voidp));
> +		//printf("cached obj: %llx\n", o_obj);
> +
> +		slot = htab_find_slot_with_hash(cachep->obj_ac, &o_obj, o_obj,
> +								INSERT);
> +
> +		if (*slot)
> +			printf("obj %llx already in array_cache!\n", o_obj);
> +
> +		obj_ac = malloc(sizeof(struct kmem_obj_ac));
> +		obj_ac->obj = o_obj;
> +		obj_ac->ac = ac;
> +
> +		*slot = obj_ac;
> +
> +		check_kmem_obj(cachep, o_obj);
> +	}
> +
> +done:
> +	free(b_entries);
> +}
> +
> +/* Array of array_caches, such as kmem_cache.array or *kmem_list3.alien */
> +static void init_kmem_array_caches(struct kmem_cache *cachep, char * b_caches,
> +					int id1, int nr_ids, enum ac_type type)
> +{
> +	char b_array_cache[GET_TYPE_SIZE(array_cache)];
> +	offset o_array_cache;
> +	int id;
> +
> +	for (id = 0; id < nr_ids; id++, b_caches += GET_TYPE_SIZE(_voidp)) {
> +		/*
> +		 * A node cannot have alien cache on the same node, but some
> +		 * kernels (-xen) apparently don't have the corresponding
> +		 * array_cache pointer NULL, so skip it now.
> +		 */
> +		if (type == ac_alien && id1 == id)
> +			continue;
> +		o_array_cache = kt_ptr_value(b_caches);
> +		if (!o_array_cache)
> +			continue;
> +		if (KDUMP_TYPE_GET(array_cache, o_array_cache, b_array_cache)) {
> +			warning(_("could not read array_cache %llx of cache %s type %s id1=%d id2=%d\n"),
> +					o_array_cache, cachep->name,
> +					ac_type_names[type], id1,
> +					type == ac_shared ? -1 : id);
> +			continue;
> +		}
> +		init_kmem_array_cache(cachep, o_array_cache, b_array_cache,
> +			type, id1, type == ac_shared ? -1 : id);
> +	}
> +}
> +
> +static void init_kmem_list3_arrays(struct kmem_cache *cachep, offset o_list3,
> +								int nid)
> +{
> +	char b_list3[GET_TYPE_SIZE(kmem_list3)];
> +	char *b_shared_caches;
> +	offset o_alien_caches;
> +	char b_alien_caches[nr_node_ids * GET_TYPE_SIZE(_voidp)];
> +
> +	if (KDUMP_TYPE_GET(kmem_list3, o_list3, b_list3)) {
> +                warning(_("error reading kmem_list3 %llx of nid %d of kmem_cache %llx name %s\n"),
> +				o_list3, nid, cachep->o_cache, cachep->name);
> +		return;
> +	}
> +
> +	/* This is a single pointer, but treat it as array to reuse code */
> +	b_shared_caches = b_list3 + MEMBER_OFFSET(kmem_list3, shared);
> +	init_kmem_array_caches(cachep, b_shared_caches, nid, 1, ac_shared);
> +
> +	o_alien_caches = kt_ptr_value(b_list3 + 
> +					MEMBER_OFFSET(kmem_list3, alien));
> +
> +	//TODO: check that this only happens for single-node systems?
> +	if (!o_alien_caches)
> +		return;
> +
> +	if (target_read_raw_memory(o_alien_caches, (void *)b_alien_caches,
> +					nr_node_ids * GET_TYPE_SIZE(_voidp))) {
> +		warning(_("could not read alien array %llx of kmem_list3 %llx of nid %d of cache %s\n"),
> +				o_alien_caches, o_list3, nid, cachep->name);
> +	}
> +
> +
> +	init_kmem_array_caches(cachep, b_alien_caches, nid, nr_node_ids,
> +								ac_alien);
> +}
> +
> +static void check_kmem_list3_slabs(struct kmem_cache *cachep,
> +						offset o_list3,	int nid)
> +{
> +	char b_list3[GET_TYPE_SIZE(kmem_list3)];
> +	offset o_lhb;
> +	unsigned long counted_free = 0;
> +	unsigned long free_objects;
> +
> +	if(KDUMP_TYPE_GET(kmem_list3, o_list3, b_list3)) {
> +                warning(_("error reading kmem_list3 %llx of nid %d of kmem_cache %llx name %s\n"),
> +				o_list3, nid, cachep->o_cache, cachep->name);
> +		return;
> +	}
> +
> +	free_objects = kt_long_value(b_list3 + MEMBER_OFFSET(kmem_list3,
> +							free_objects));
> +
> +	o_lhb = o_list3 + MEMBER_OFFSET(kmem_list3, slabs_partial);
> +	counted_free += check_kmem_slabs(cachep, o_lhb, slab_partial);
> +
> +	o_lhb = o_list3 + MEMBER_OFFSET(kmem_list3, slabs_full);
> +	counted_free += check_kmem_slabs(cachep, o_lhb, slab_full);
> +
> +	o_lhb = o_list3 + MEMBER_OFFSET(kmem_list3, slabs_free);
> +	counted_free += check_kmem_slabs(cachep, o_lhb, slab_free);
> +
> +//	printf("free=%lu counted=%lu\n", free_objects, counted_free);
> +	if (free_objects != counted_free)
> +		warning(_("cache %s should have %lu free objects but we counted %lu\n"),
> +				cachep->name, free_objects, counted_free);
> +}
> +
> +static struct kmem_cache *init_kmem_cache(offset o_cache)
> +{
> +	struct kmem_cache *cache;
> +	char b_cache[GET_TYPE_SIZE(kmem_cache)];
> +	offset o_cache_name;
> +	void **slot;
> +
> +	if (!kmem_cache_cache)
> +		init_kmem_caches();
> +
> +	slot = htab_find_slot_with_hash(kmem_cache_cache, &o_cache, o_cache,
> +								INSERT);
> +	if (*slot) {
> +		cache = (struct kmem_cache*) *slot;
> +//		printf("kmem_cache %s found in hashtab!\n", cache->name);
> +		return cache;
> +	}
> +
> +//	printf("kmem_cache %llx not found in hashtab, inserting\n", o_cache);
> +
> +	cache = malloc(sizeof(struct kmem_cache));
> +	cache->o_cache = o_cache;
> +
> +	if (KDUMP_TYPE_GET(kmem_cache, o_cache, b_cache)) {
> +		warning(_("error reading contents of kmem_cache at %llx\n"),
> +								o_cache);
> +		cache->broken = 1;
> +		cache->name = "(broken)";
> +		goto done;
> +	}
> +
> +	cache->num = kt_int_value(b_cache + MEMBER_OFFSET(kmem_cache, num));
> +	cache->buffer_size = kt_int_value(b_cache + MEMBER_OFFSET(kmem_cache,
> +								buffer_size));
> +	cache->array_caches_inited = 0;
> +
> +	o_cache_name = kt_ptr_value(b_cache + MEMBER_OFFSET(kmem_cache,name));
> +	if (!o_cache_name) {
> +		fprintf(stderr, "cache name pointer NULL\n");
> +		cache->name = "(null)";
> +	}
> +
> +	else cache->name = kt_strndup(o_cache_name, 128);
> +	cache->broken = 0;
> +//	printf("cache name is: %s\n", cache->name);
> +
> +done:
> +	*slot = cache;
> +	return cache;
> +}
> +
> +static void init_kmem_cache_arrays(struct kmem_cache *cache)
> +{
> +	char b_cache[GET_TYPE_SIZE(kmem_cache)];
> +	char *b_nodelists, *b_array_caches;
> +	offset o_nodelist, o_array_cache;
> +	char *nodelist, *array_cache;
> +	int node;
> +
> +	if (cache->array_caches_inited || cache->broken)
> +		return;
> +
> +	if (KDUMP_TYPE_GET(kmem_cache, cache->o_cache, b_cache)) {
> +		warning(_("error reading contents of kmem_cache at %llx\n"),
> +							cache->o_cache);
> +		return;
> +	}
> +
> +
> +	cache->obj_ac = htab_create_alloc(64, kmem_ac_hash, kmem_ac_eq,
> +						NULL, xcalloc, xfree);
> +
> +	b_nodelists = b_cache + MEMBER_OFFSET(kmem_cache, nodelists);
> +	for (node = 0; node < nr_node_ids;
> +			node++, b_nodelists += GET_TYPE_SIZE(_voidp)) {
> +		o_nodelist = kt_ptr_value(b_nodelists);
> +		if (!o_nodelist)
> +			continue;
> +//		printf("found nodelist[%d] %llx\n", node, o_nodelist);
> +		init_kmem_list3_arrays(cache, o_nodelist, node);
> +	}
> +
> +	b_array_caches = b_cache + MEMBER_OFFSET(kmem_cache, array);
> +	init_kmem_array_caches(cache, b_array_caches, -1, nr_cpu_ids,
> +								ac_percpu);
> +
> +	cache->array_caches_inited = 1;
> +}
> +
> +static void check_kmem_cache(struct kmem_cache *cache)
> +{
> +	char b_cache[GET_TYPE_SIZE(kmem_cache)];
> +	char *b_nodelists, *b_array_caches;
> +	offset o_nodelist, o_array_cache;
> +	char *nodelist, *array_cache;
> +	int node;
> +
> +	init_kmem_cache_arrays(cache);
> +
> +	if (KDUMP_TYPE_GET(kmem_cache, cache->o_cache, b_cache)) {
> +		warning(_("error reading contents of kmem_cache at %llx\n"),
> +							cache->o_cache);
> +		return;
> +	}
> +
> +	b_nodelists = b_cache + MEMBER_OFFSET(kmem_cache, nodelists);
> +	for (node = 0; node < nr_node_ids;
> +			node++, b_nodelists += GET_TYPE_SIZE(_voidp)) {
> +		o_nodelist = kt_ptr_value(b_nodelists);
> +		if (!o_nodelist)
> +			continue;
> +//		printf("found nodelist[%d] %llx\n", node, o_nodelist);
> +		check_kmem_list3_slabs(cache, o_nodelist, node);
> +	}
> +}
> +
> +static int init_kmem_caches(void)
> +{
> +	offset o_kmem_cache;
> +	struct list_iter iter;
> +	offset o_nr_node_ids, o_nr_cpu_ids;
> +
> +	kmem_cache_cache = htab_create_alloc(64, kmem_cache_hash,
> +					kmem_cache_eq, NULL, xcalloc, xfree);
> +
> +	o_slab_caches = get_symbol_value("slab_caches");
> +	if (! o_slab_caches) {
> +		o_slab_caches = get_symbol_value("cache_chain");
> +		if (!o_slab_caches) {
> +			warning(_("Cannot find slab_caches\n"));
> +			return -1;
> +		}
> +	}
> +	printf("slab_caches: %llx\n", o_slab_caches);
> +
> +	o_nr_cpu_ids = get_symbol_value("nr_cpu_ids");
> +	if (! o_nr_cpu_ids) {
> +		warning(_("nr_cpu_ids not found, assuming 1 for !SMP"));
> +	} else {
> +		printf("o_nr_cpu_ids = %llx\n", o_nr_cpu_ids);
> +		nr_cpu_ids = kt_int_value_off(o_nr_cpu_ids);
> +		printf("nr_cpu_ids = %d\n", nr_cpu_ids);
> +	}
> +
> +	o_nr_node_ids = get_symbol_value("nr_node_ids");
> +	if (! o_nr_node_ids) {
> +		warning(_("nr_node_ids not found, assuming 1 for !NUMA"));
> +	} else {
> +		printf("o_nr_node_ids = %llx\n", o_nr_node_ids);
> +		nr_node_ids = kt_int_value_off(o_nr_node_ids);
> +		printf("nr_node_ids = %d\n", nr_node_ids);
> +	}
> +
> +	list_for_each(iter, o_slab_caches) {
> +		o_kmem_cache = iter.curr - MEMBER_OFFSET(kmem_cache,list);
> +//		printf("found kmem cache: %llx\n", o_kmem_cache);
> +
> +		init_kmem_cache(o_kmem_cache);
> +	}
> +
> +	return 0;
> +}
> +
> +static void check_kmem_caches(void)
> +{
> +	offset o_lhb, o_kmem_cache;
> +	struct list_iter iter;
> +	struct kmem_cache *cache;
> +
> +	if (!kmem_cache_cache)
> +		init_kmem_caches();
> +
> +	list_for_each(iter, o_slab_caches) {
> +		o_kmem_cache = iter.curr - MEMBER_OFFSET(kmem_cache,list);
> +
> +		cache = init_kmem_cache(o_kmem_cache);
> +		printf("checking kmem cache %llx name %s\n", o_kmem_cache,
> +				cache->name);
> +		if (cache->broken) {
> +			printf("cache is too broken, skipping");
> +			continue;
> +		}
> +		check_kmem_cache(cache);
> +	}
> +}
> +
> +
> +
> +
> +static struct page read_page(offset o_page)
> +{
> +	char b_page[GET_TYPE_SIZE(page)];
> +	struct page page;
> +
> +	if (KDUMP_TYPE_GET(page, o_page, b_page)) {
> +		page.valid = 0;
> +		return page;
> +	}
> +
> +	page.flags = kt_long_value(b_page + MEMBER_OFFSET(page, flags));
> +	page.lru.next = kt_ptr_value(b_page + MEMBER_OFFSET(page, lru)
> +					+ MEMBER_OFFSET(list_head, next));
> +	page.lru.prev = kt_ptr_value(b_page + MEMBER_OFFSET(page, lru)
> +					+ MEMBER_OFFSET(list_head, prev));
> +	page.first_page = kt_ptr_value(b_page +
> +					MEMBER_OFFSET(page, first_page));
> +	page.valid = 1;
> +
> +	return page;
> +}
> +
> +static inline struct page compound_head(struct page page)
> +{
> +	if (page.valid && PageTail(page))
> +		return read_page(page.first_page);
> +	return page;
> +}
> +
> +static struct page virt_to_head_page(offset addr)
> +{
> +	struct page page;
> +
> +	page = read_page(virt_to_opage(addr));
> +
> +	return compound_head(page);
> +}
> +
> +static int check_slab_obj(offset obj)
> +{
> +	struct page page;
> +	offset o_cache, o_slab;
> +	struct kmem_cache *cachep;
> +	struct kmem_slab *slabp;
> +	struct kmem_obj_ac *obj_ac;
> +	struct kmem_ac *ac;
> +	unsigned int idx;
> +	offset obj_base;
> +	unsigned int i, cnt = 0;
> +	int free = 0;
> +
> +	page = virt_to_head_page(obj);
> +
> +	if (!page.valid) {
> +		warning(_("unable to read struct page for object at %llx\n"),
> +				obj);
> +		return 0;
> +	}
> +
> +	if (!PageSlab(page))
> +		return 0;
> +
> +	o_cache = page.lru.next;
> +	o_slab = page.lru.prev;
> +	printf("pointer %llx is on slab %llx of cache %llx\n", obj, o_slab,
> +								o_cache);
> +
> +	cachep = init_kmem_cache(o_cache);
> +	init_kmem_cache_arrays(cachep);
> +	slabp = init_kmem_slab(cachep, o_slab);
> +
> +	//TODO: kernel implementation uses reciprocal_divide, check?
> +	idx = (obj - slabp->s_mem) / cachep->buffer_size;
> +	obj_base = slabp->s_mem + idx * cachep->buffer_size;
> +
> +	printf("pointer is to object %llx with index %u\n", obj_base, idx);
> +
> +	i = slabp->free;
> +	while (i != BUFCTL_END) {
> +		cnt++;
> +
> +		if (cnt > cachep->num) {
> +			printf("free bufctl cycle detected in slab %llx\n", o_slab);
> +			break;
> +		}
> +		if (i > cachep->num) {
> +			printf("bufctl value overflow (%d) in slab %llx\n", i, o_slab);
> +			break;
> +		}
> +
> +		if (i == idx)
> +			free = 1;
> +
> +		i = slabp->bufctl[i];
> +	}
> +
> +	printf("object is %s\n", free ? "free" : "allocated");
> +
> +	obj_ac = htab_find_with_hash(cachep->obj_ac, &obj, obj);
> +
> +	if (obj_ac) {
> +		ac = obj_ac->ac;
> +		printf("object is in array_cache %llx type %s[%d,%d]\n",
> +			ac->offset, ac_type_names[ac->type], ac->at_node,
> +			ac->for_node_cpu);
> +	}
> +
> +	free_kmem_slab(slabp);
> +
> +	return 1;
> +}
> +
> +static int init_memmap(void)
> +{
> +	const char *cfg;
> +	offset o_mem_map;
> +	offset o_page;
> +	struct page page;
> +	unsigned long long p_memmap;
> +
> +	//FIXME: why are all NULL?
> +
> +	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_FLATMEM");
> +	printf("CONFIG_FLATMEM=%s\n", cfg ? cfg : "(null)");
> +
> +	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_DISCONTIGMEM");
> +	printf("CONFIG_DISCONTIGMEM=%s\n", cfg ? cfg : "(null)");
> +
> +	cfg = kdump_vmcoreinfo_row(dump_ctx, "CONFIG_SPARSEMEM_VMEMMAP");
> +	printf("CONFIG_SPARSEMEM_VMEMMAP=%s\n", cfg ? cfg : "(null)");
> +
> +	o_mem_map = get_symbol_value("mem_map");
> +	printf("memmap: %llx\n", o_mem_map);
> +
> +	if (o_mem_map) {
> +		p_memmap = kt_ptr_value_off(o_mem_map);
> +		printf("memmap is pointer to: %llx\n", p_memmap);
> +		if (p_memmap != -1)
> +			memmap = p_memmap;
> +	}
> +
> +/*
> +	o_page = virt_to_opage(0xffff880138bedf40UL);
> +	printf("ffff880138bedf40 is page %llx\n", o_page);
> +
> +	page = read_page(o_page);
> +	printf("flags=%lx lru=(%llx,%llx) first_page=%llx\n",page.flags,
> +			page.lru.next, page.lru.prev, page.first_page);
> +	printf("PG_slab=%llx\n", get_symbol_value("PG_slab"));
> +	printf("PageSlab(page)==%d\n", PageSlab(page));
> +*/
> +	return 0;
> +}
> +
>  static int init_values(void);
>  static int init_values(void)
>  {
>  	struct symbol *s;
>  	char *b = NULL, *init_task = NULL, *task = NULL;
> -	offset off, off_task, rsp, rip, _rsp;
> +	offset off, o_task, rsp, rip, _rsp;
>  	offset tasks;
> +	offset o_tasks;
> +	offset off_task;
>  	offset stack;
>  	offset o_init_task;
>  	int state;
> @@ -1108,6 +2266,7 @@ static int init_values(void)
>  	int cnt = 0;
>  	int pid_reserve;
>  	struct task_info *task_info;
> +	struct list_iter iter;
>  
>  	s = NULL;
>  	
> @@ -1141,58 +2300,59 @@ static int init_values(void)
>  		goto error;
>  	task = KDUMP_TYPE_ALLOC(task_struct);
>  	if (!task) goto error;
> +
>  	if (KDUMP_TYPE_GET(task_struct, o_init_task, init_task))
>  		goto error;
>  	tasks = kt_ptr_value(init_task + MEMBER_OFFSET(task_struct,tasks));
> +	o_tasks = o_init_task + MEMBER_OFFSET(task_struct, tasks);
>  
>  	i = 0;
> -	off = 0;
>  	pid_reserve = 50000;
>  
>  	print_thread_events = 0;
>  	in = current_inferior();
>  	inferior_appeared (in, 1);
>  
> -	list_head_for_each(tasks, init_task + MEMBER_OFFSET(task_struct,tasks), off) {
> -		
> +	list_for_each_from(iter, o_tasks) {
> +
>  		struct thread_info *info;
>  		int pid;
>  		ptid_t tt;
>  		struct regcache *rc;
>  		long long val;
>  		offset main_tasks, mt;
> -		
> +		struct list_iter iter_thr;
> +		offset o_threads;
>  
>  		//fprintf(stderr, __FILE__":%d: ok\n", __LINE__);
>  		off_task = off - MEMBER_OFFSET(task_struct,tasks);
>  		if (KDUMP_TYPE_GET(task_struct, off_task, task)) continue;
>  
> -		main_tasks = off_task;//kt_ptr_value(task + MEMBER_OFFSET(task_struct,thread_group));
> +		o_task = iter.curr - MEMBER_OFFSET(task_struct, tasks);
> +		o_threads = o_task + MEMBER_OFFSET(task_struct, thread_group);
> +		list_for_each_from(iter_thr, o_threads) {
>  
> -		do {
> -		//list_head_for_each(main_tasks, task + MEMBER_OFFSET(task_struct,thread_group), mt) {
> -
> -			//off_task = mt - MEMBER_OFFSET(task_struct,thread_group);
> -			if (KDUMP_TYPE_GET(task_struct, off_task, task))  {
> +			o_task = iter_thr.curr - MEMBER_OFFSET(task_struct,
> +								thread_group);
> +			if (KDUMP_TYPE_GET(task_struct, o_task, task))
>  				continue;
> -			}
> -
> -			if (add_task(off_task, &pid_reserve, task)) {
> -
> -			} else {
> -				
> -				printf_unfiltered(_("Loaded processes: %d\r"), ++cnt);
> -			}
> -			off_task = kt_ptr_value(task + MEMBER_OFFSET(task_struct, thread_group)) - MEMBER_OFFSET(task_struct, thread_group);
> -			if (off_task == main_tasks) break;
>  
> -		} while (1);
> +			if (!add_task(o_task, &pid_reserve, task))
> +				printf_unfiltered(_("Loaded processes: %d\r"),
> +									++cnt);
> +		}
>  	}
>  
>  	if (b) free(b);
>  	if (init_task) free(init_task);
>  
>  	printf_unfiltered(_("Loaded processes: %d\n"), cnt);
> +	init_memmap();
> +
> +//	check_kmem_caches();
> +//	check_slab_obj(0xffff880138bedf40UL);
> +//	check_slab_obj(0xffff8801359734c0UL);
> +
>  	return 0;
>  error:
>  	if (b) free(b);
> @@ -1373,7 +2533,6 @@ core_detach (struct target_ops *ops, const char *args, int from_tty)
>  		printf_filtered (_("No core file now.\n"));
>  }
>  
> -static kdump_paddr_t transform_memory(kdump_paddr_t addr);
>  static kdump_paddr_t transform_memory(kdump_paddr_t addr)
>  {
>  	kdump_paddr_t out;
> @@ -1396,10 +2555,12 @@ kdump_xfer_partial (struct target_ops *ops, enum target_object object,
>  	{
>  		case TARGET_OBJECT_MEMORY:
>  			offset = transform_memory((kdump_paddr_t)offset);
> -			r = kdump_read(dump_ctx, (kdump_paddr_t)offset, (unsigned char*)readbuf, (size_t)len, KDUMP_PHYSADDR);
> +			r = kdump_read(dump_ctx, KDUMP_KPHYSADDR, (kdump_paddr_t)offset, (unsigned char*)readbuf, (size_t)len);
>  			if (r != len) {
> -				error(_("Cannot read %lu bytes from %lx (%lld)!"), (size_t)len, (long unsigned int)offset, (long long)r);
> -			} else
> +				warning(_("Cannot read %lu bytes from %lx (%lld)!"),
> +						(size_t)len, (long unsigned int)offset, (long long)r);
> +				return TARGET_XFER_E_IO;
> +			} else 
>  				*xfered_len = len;
>  
>  			return TARGET_XFER_OK;
> @@ -1797,7 +2958,9 @@ static void kdumpps_command(char *fn, int from_tty)
>  		if (!task) continue;
>  		if (task->cpu == -1) cpu[0] = '\0';
>  		else snprintf(cpu, 5, "% 4d", task->cpu);
> +#ifdef _DEBUG
>  		printf_filtered(_("% 7d %llx %llx %llx %-4s %s\n"), task->pid, task->task_struct, task->ip, task->sp, cpu, tp->name);
> +#endif
>  	}
>  }
>  

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Enable gdb to open Linux kernel dumps
  2016-02-01 11:51   ` Kieran Bingham
@ 2016-02-01 14:32     ` Ales Novak
  2016-02-01 15:01       ` Jeff Mahoney
  0 siblings, 1 reply; 31+ messages in thread
From: Ales Novak @ 2016-02-01 14:32 UTC (permalink / raw)
  To: Kieran Bingham; +Cc: gdb-patches, jeffm

On 2016-2-1 12:51, Kieran Bingham wrote:

>
> On 01/02/16 11:27, Kieran Bingham wrote:
>> Hi Ales,
>>
>> I'm just checking out your tree now to try locally.
>>
>> It sounds like there is a high level of cross over in our work, but I
>> believe our work can complement each other's if we work together.

Yes. Our primary intention is to open kdumps (i.e. dead images of
fallen kernels), but whatever can be shared between live and dead
kernel debugging should be shared...
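
For example, a task-list walker written purely against gdb's Python API
works the same whether the stratum underneath is a live kernel or the
kdump target. A minimal sketch (assuming a vmlinux with debuginfo is
loaded; names and the output format are only illustrative):

import gdb

def for_each_task():
    """Yield a gdb.Value for each task_struct on init_task.tasks."""
    char_p = gdb.lookup_type("char").pointer()
    task_p = gdb.lookup_type("struct task_struct").pointer()

    init_task = gdb.parse_and_eval("init_task")
    head = init_task["tasks"].address
    # offsetof(struct task_struct, tasks), taken from the debuginfo
    offset = head.cast(char_p) - init_task.address.cast(char_p)

    node = init_task["tasks"]["next"]
    while node != head:
        yield (node.cast(char_p) - offset).cast(task_p).dereference()
        node = node.dereference()["next"]

# After sourcing this file in gdb (the file name is just an example):
#   (gdb) source linux-tasks.py
#   (gdb) python print([t["comm"].string() for t in for_each_task()])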

>> On 31/01/16 21:44, Ales Novak wrote:
>>> Following patches are adding basic ability to access Linux kernel
>>> dumps using the libkdumpfile library. They're creating new target
>>> "kdump", so all one has to do is to provide appropriate debuginfo and
>>> then run "target kdump /path/to/vmcore".
>>>
>>> The tasks of the dumped kernel are mapped to threads in gdb.
>>>
>>> Except for that, there's a code adding understanding of Linux SLAB
>>> memory allocator, which means we can tell for the given address to
>>> which SLAB does it belong, or list objects for give SLAB name - and
>>> more.
>>>
>>> Patches are against "gdb-7.10-release" (but will apply elsewhere).
>>>
>>> Note: registers of task are fetched accordingly - either from the dump
>>> metadata (the active tasks) or from their stacks. It should be noted
>>> that as this mechanism varies amongst the kernel versions and
>>> configurations, my naive implementation currently covers only the
>>> dumps I encounter, handling of different kernel versions is to be
>>> added.
>> In the work that I am doing, I had expected this to be done in python
>> for exactly this reason. The kernel version specifics, (and architecture
>> specifics) can then live alongside their respective trees.
>>> In the near future, our plan is to remove the clumsy C-code handling
>>> this and reimplement it in Python - only the binding to certain gdb
>>> structures (e.g. thread, regcache) has to be added. A colleague of
>>> mine is already working on that.
>> This sounds exactly like the work I am doing right now.
>> Could you pass on my details to your colleague so we can discuss?
>
> Aha, or is your colleague Andreas Arnez? I'm just about to reply to his
> mail over on gbd@ next.

No, it's Jeff Mahoney. His current efforts, which include Python binding 
to threads' regcaches and more, are at:

https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/

And yes, you're right, I incorrectly removed authorship from some of his
older patches (which in fact are not necessary for the current gdb-kdump
to work; they extend the Python bindings).

And as you've already found, his older patches are at:

https://github.com/jeffmahoney/py-crash


>
>
>
>>
>> I recently made a posting on gdb@ suggesting the addition of a
>> gdb.Target object to work towards implementing this, and I have been
>> liasing with Jan Kiszka to manage the Linux/scripts/gdb/ integration.
>>
>>
>>
>>> The github home of these patches is at:
>>>
>>> https://github.com/alesax/gdb-kdump/tree/for-next
>>>
>>> libkdumpfile lives at:
>>>
>>> https://github.com/ptesarik/libkdumpfile
>>>
>>> Fork adding the SLAB support lives at:
>>>
>>> https://github.com/tehcaster/gdb-kdump/tree/slab-support
>>>
>>>
>> Regards
>>
>> Kieran Bingham
>>
>

-- 
Ales Novak

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Enable gdb to open Linux kernel dumps
  2016-02-01 14:32     ` Ales Novak
@ 2016-02-01 15:01       ` Jeff Mahoney
  2016-02-02  9:12         ` Kieran Bingham
  2016-02-10  3:24         ` Jeff Mahoney
  0 siblings, 2 replies; 31+ messages in thread
From: Jeff Mahoney @ 2016-02-01 15:01 UTC (permalink / raw)
  To: Ales Novak, Kieran Bingham; +Cc: gdb-patches

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 2/1/16 9:32 AM, Ales Novak wrote:
> On 2016-2-1 12:51, Kieran Bingham wrote:
> 
>> 
>> On 01/02/16 11:27, Kieran Bingham wrote:
>>> Hi Ales,
>>> 
>>> I'm just checking out your tree now to try locally.
>>> 
>>> It sounds like there is a high level of cross over in our work,
>>> but I believe our work can complement each other's if we work
>>> together.
> 
> Yes. Our primary intention is to open kdumps (i.e. dead images of
> the fallen kernels), but what can be shared between live and dead
> kernel debugging, should be shared...
> 
>>> On 31/01/16 21:44, Ales Novak wrote:
>>>> Following patches are adding basic ability to access Linux
>>>> kernel dumps using the libkdumpfile library. They're creating
>>>> new target "kdump", so all one has to do is to provide
>>>> appropriate debuginfo and then run "target kdump
>>>> /path/to/vmcore".
>>>> 
>>>> The tasks of the dumped kernel are mapped to threads in gdb.
>>>> 
>>>> Except for that, there's a code adding understanding of Linux
>>>> SLAB memory allocator, which means we can tell for the given
>>>> address to which SLAB does it belong, or list objects for
>>>> give SLAB name - and more.
>>>> 
>>>> Patches are against "gdb-7.10-release" (but will apply
>>>> elsewhere).
>>>> 
>>>> Note: registers of task are fetched accordingly - either from
>>>> the dump metadata (the active tasks) or from their stacks. It
>>>> should be noted that as this mechanism varies amongst the
>>>> kernel versions and configurations, my naive implementation
>>>> currently covers only the dumps I encounter, handling of
>>>> different kernel versions is to be added.
>>> In the work that I am doing, I had expected this to be done in
>>> python for exactly this reason. The kernel version specifics,
>>> (and architecture specifics) can then live alongside their
>>> respective trees.
>>>> In the near future, our plan is to remove the clumsy C-code
>>>> handling this and reimplement it in Python - only the binding
>>>> to certain gdb structures (e.g. thread, regcache) has to be
>>>> added. A colleague of mine is already working on that.
>>> This sounds exactly like the work I am doing right now. Could
>>> you pass on my details to your colleague so we can discuss?
>> 
>> Aha, or is your colleague Andreas Arnez? I'm just about to reply
>> to his mail over on gbd@ next.
> 
> No, it's Jeff Mahoney. His current efforts, which include Python
> binding to threads' regcaches and more, are at:
> 
> https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/
> 
> And yes, you're right I've incorrectly removed autorship from some
> of his older patches (which in fact are not necessary for the
> current gdb-kdump to work, they are extending the Python binding).
> 
> And as you've already found, his older patches are at:
> 
> https://github.com/jeffmahoney/py-crash

Hi guys -

Ales gave me the heads up that you were discussing these.  The github
repo is old and I haven't touched it in a year or so.  The link to my
git server is the active one, but I should be clear that this is
currently a WIP from my perspective.  I've been doing my work in the
rel-7.10.1-kdump branch, which is based on the gdb-7.10.1-release tag,
plus some SUSE patches to handle build-ids and external debuginfo files.

This branch is subject to rebasing as I make progress, but there should
be a stable base underneath it that I can condense and put into a
separate branch for public consumption.

- -Jeff

> 
>> 
>> 
>> 
>>> 
>>> I recently made a posting on gdb@ suggesting the addition of a 
>>> gdb.Target object to work towards implementing this, and I have
>>> been liasing with Jan Kiszka to manage the Linux/scripts/gdb/
>>> integration.
>>> 
>>> 
>>> 
>>>> The github home of these patches is at:
>>>> 
>>>> https://github.com/alesax/gdb-kdump/tree/for-next
>>>> 
>>>> libkdumpfile lives at:
>>>> 
>>>> https://github.com/ptesarik/libkdumpfile
>>>> 
>>>> Fork adding the SLAB support lives at:
>>>> 
>>>> https://github.com/tehcaster/gdb-kdump/tree/slab-support
>>>> 
>>>> 
>>> Regards
>>> 
>>> Kieran Bingham
>>> 
>> 
> 


- -- 
Jeff Mahoney
SUSE Labs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-01-31 21:45 ` [PATCH 2/4] Add Jeff Mahoney's py-crash patches Ales Novak
  2016-02-01 12:35   ` Kieran Bingham
@ 2016-02-01 22:23   ` Doug Evans
  2016-02-02  2:56     ` Jeff Mahoney
  1 sibling, 1 reply; 31+ messages in thread
From: Doug Evans @ 2016-02-01 22:23 UTC (permalink / raw)
  To: Ales Novak; +Cc: gdb-patches

On Sun, Jan 31, 2016 at 1:44 PM, Ales Novak <alnovak@suse.cz> wrote:
> ---
>  gdb/Makefile.in              |  12 ++
>  gdb/python/py-minsymbol.c    | 353 +++++++++++++++++++++++++++++++++++++
>  gdb/python/py-objfile.c      |  29 +++-
>  gdb/python/py-section.c      | 401 +++++++++++++++++++++++++++++++++++++++++++
>  gdb/python/py-symbol.c       |  52 ++++--
>  gdb/python/python-internal.h |  14 ++
>  gdb/python/python.c          |   7 +-
>  7 files changed, 853 insertions(+), 15 deletions(-)
>  create mode 100644 gdb/python/py-minsymbol.c
>  create mode 100644 gdb/python/py-section.c


Hi.

Part of what this patch is doing is exporting bfd to python.
E.g., all the SEC_* constants.

As a rule we absolutely discourage people from using bfd outside of
the binutils+gdb source tree.
Either this rule needs to change, or I don't think we can allow this patch.
I'd be interested to hear what others in the community think.

For myself, I would much rather export ELF separately (e.g., a separate
python API one can use independent of any particular tool, including gdb),
and then have gdb provide the necessary glue to use this API.
[I can imagine some compromises being needed, at least for now;
e.g., it'd be cumbersome to read in all ELF symbols twice.
But fixing that is just an optimization.]
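
To make that concrete, here is a rough sketch of the kind of
tool-independent ELF access suggested above.  It uses the third-party
pyelftools package purely as an illustration; the package choice, the
file name and the exact API that would eventually be exported are
assumptions, not something this patch set provides:

# Illustration only: inspect section names, flags and addresses
# without going through bfd or gdb.
from elftools.elf.elffile import ELFFile

with open("vmlinux", "rb") as f:          # example input file
    elf = ELFFile(f)
    for sec in elf.iter_sections():
        # sh_flags/sh_addr are the raw ELF section header fields.
        print(sec.name, hex(sec["sh_flags"]), hex(sec["sh_addr"]))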


> ...
> +  if (PyModule_AddIntConstant (gdb_module, "SEC_NO_FLAGS", SEC_NO_FLAGS) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_ALLOC", SEC_ALLOC) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LOAD", SEC_LOAD) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_RELOC", SEC_RELOC) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_READONLY", SEC_READONLY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_CODE", SEC_CODE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_DATA", SEC_DATA) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_ROM", SEC_ROM) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_CONSTRUCTOR",
> +                                 SEC_CONSTRUCTOR) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_HAS_CONTENTS",
> +                                 SEC_HAS_CONTENTS) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_NEVER_LOAD",
> +                                 SEC_NEVER_LOAD) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_THREAD_LOCAL",
> +                                 SEC_THREAD_LOCAL) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_HAS_GOT_REF",
> +                                 SEC_HAS_GOT_REF) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_IS_COMMON",
> +                                 SEC_IS_COMMON) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_DEBUGGING",
> +                                 SEC_DEBUGGING) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_IN_MEMORY",
> +                                 SEC_IN_MEMORY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_EXCLUDE", SEC_EXCLUDE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_SORT_ENTRIES",
> +                                 SEC_SORT_ENTRIES) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_ONCE",
> +                                 SEC_LINK_ONCE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES",
> +                                 SEC_LINK_DUPLICATES) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_DISCARD",
> +                                 SEC_LINK_DUPLICATES_DISCARD) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_ONE_ONLY",
> +                                 SEC_LINK_DUPLICATES_ONE_ONLY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES_SAME_SIZE",
> +                                 SEC_LINK_DUPLICATES_SAME_SIZE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_LINKER_CREATED",
> +                                 SEC_LINKER_CREATED) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_KEEP", SEC_KEEP) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_SMALL_DATA",
> +                                 SEC_SMALL_DATA) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_MERGE", SEC_MERGE) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_STRNGS", SEC_STRINGS) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_GROUP", SEC_GROUP) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_SHARED_LIBRARY",
> +                                 SEC_COFF_SHARED_LIBRARY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_ELF_REVERSE_COPY",
> +                                 SEC_ELF_REVERSE_COPY) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_SHARED",
> +                                 SEC_COFF_SHARED) < 0
> +      || PyModule_AddIntConstant (gdb_module, "SEC_COFF_NOREAD",
> +                                 SEC_COFF_NOREAD) < 0)
> +    return -1;
> ...

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/4] Add SLAB allocator understanding.
  2016-02-01 13:21   ` Kieran Bingham
@ 2016-02-01 22:30     ` Doug Evans
  2016-02-02  2:05       ` Ales Novak
  2016-02-02  8:11       ` Kieran Bingham
  2016-02-02 10:04     ` Vlastimil Babka
  1 sibling, 2 replies; 31+ messages in thread
From: Doug Evans @ 2016-02-01 22:30 UTC (permalink / raw)
  To: Kieran Bingham; +Cc: Ales Novak, gdb-patches, Vlastimil Babka, Jan Kiszka

On Mon, Feb 1, 2016 at 5:21 AM, Kieran Bingham <kieranbingham@gmail.com> wrote:
> This is interesting work!
>
> I had been discussing how we might achieve managing this with Jan @
> FOSDEM yesterday.
>
> I believe a python implementation of this could be possible, and then
> this code can live in the Kernel, and be split across architecture
> specific layers where necessary to implement handling userspace
> application boundaries from the Kernel Awareness.

Keeping application specific code with the application instead of gdb
is definitely a worthy goal.
[one can quibble over whether linux is an application of course,
but that's just terminology]

> I believe that if properly abstracted (which I think it looks like this
> already will be), with kdump as a target layer, we can implement the
> Kernel awareness layers above, so that they can be common to all of our
> use case scenarios.
>
> I have recently proposed creating a gdb.Target object, so that we can
> layer the kernel specific code on top as a higher stratum layer. This
> code can then live in the Kernel, and be version specific there, and
> would then cooperate with the layers below, be that a live target over
> JTAG, or a virtualised qemu/kvm, or a core dump file:

Providing gdb.Target is also a worthy goal.
I hope someone will take this on.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/4] Add SLAB allocator understanding.
  2016-02-01 22:30     ` Doug Evans
@ 2016-02-02  2:05       ` Ales Novak
  2016-02-02  7:22         ` Jan Kiszka
  2016-02-02  8:11       ` Kieran Bingham
  1 sibling, 1 reply; 31+ messages in thread
From: Ales Novak @ 2016-02-02  2:05 UTC (permalink / raw)
  To: Doug Evans; +Cc: Kieran Bingham, gdb-patches, Vlastimil Babka, Jan Kiszka

On 2016-2-1 23:29, Doug Evans wrote:

> On Mon, Feb 1, 2016 at 5:21 AM, Kieran Bingham <kieranbingham@gmail.com> wrote:
>> This is interesting work!
>>
>> I had been discussing how we might achieve managing this with Jan @
>> FOSDEM yesterday.
>>
>> I believe a python implementation of this could be possible, and then
>> this code can live in the Kernel, and be split across architecture
>> specific layers where necessary to implement handling userspace
>> application boundaries from the Kernel Awareness.
>
> Keeping application specific code with the application instead of gdb
> is definitely a worthy goal.
> [one can quibble over whether linux is an application of course,
> but that's just terminology]

Yeah, you're right. Yet if we're talking about the SLAB in particular -
considering how many objects this subsystem has to cope with
simultaneously, I'm afraid that adding any extra overhead (e.g. from
Python) will be just painful.

It's a pity that gdb cannot be extended dynamically, afaics.

-- 
Ales Novak

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-02-01 22:23   ` Doug Evans
@ 2016-02-02  2:56     ` Jeff Mahoney
  2016-02-02  8:25       ` Kieran Bingham
  2016-02-03 17:55       ` Jeff Mahoney
  0 siblings, 2 replies; 31+ messages in thread
From: Jeff Mahoney @ 2016-02-02  2:56 UTC (permalink / raw)
  To: Doug Evans, Ales Novak; +Cc: gdb-patches


On 2/1/16 5:22 PM, Doug Evans wrote:
> On Sun, Jan 31, 2016 at 1:44 PM, Ales Novak <alnovak@suse.cz>
> wrote:
>> --- gdb/Makefile.in              |  12 ++ 
>> gdb/python/py-minsymbol.c    | 353
>> +++++++++++++++++++++++++++++++++++++ gdb/python/py-objfile.c
>> |  29 +++- gdb/python/py-section.c      | 401
>> +++++++++++++++++++++++++++++++++++++++++++ 
>> gdb/python/py-symbol.c       |  52 ++++-- 
>> gdb/python/python-internal.h |  14 ++ gdb/python/python.c
>> |   7 +- 7 files changed, 853 insertions(+), 15 deletions(-) 
>> create mode 100644 gdb/python/py-minsymbol.c create mode 100644
>> gdb/python/py-section.c
> 
> 
> Hi.

Hi Doug -

> Part of what this patch is doing is exporting bfd to python. E.g.,
> all the SEC_* constants.
> 
> As a rule we absolutely discourage people from using bfd outside
> of the the binutils+gdb source tree. Either this rule needs to
> change, or I don't think we can allow this patch. I'd be interested
> to hear what others in the community think.

That's unfortunate.  The Linux kernel uses ELF sections for a number
of purposes.  Most notable is the definition of per-cpu variables.
Without the ELF section, we can't resolve the addresses for the
variables.  So, from our perspective, it's a requirement.

> For myself, I would much rather export ELF separately (e.g., a
> separate python API one can use independent of any particular tool,
> including gdb), and then have gdb provide the necessary glue to use
> this API. [I can imagine some compromises being needed, at least
> for now; e.g., it'd be cumbersome to read in all ELF symbols
> twice. But fixing that is just an optimization.]

Ok, that's doable.  As it is, the section code mixes GDB and BFD
pretty heavily.  It shouldn't be too difficult to separate the two out
and push the section stuff into a new BFD python interface and
associate the objfiles with it.

-Jeff

> 
>> ... +  if (PyModule_AddIntConstant (gdb_module, "SEC_NO_FLAGS",
>> SEC_NO_FLAGS) < 0 +      || PyModule_AddIntConstant (gdb_module,
>> "SEC_ALLOC", SEC_ALLOC) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_LOAD", SEC_LOAD) < 0 +      ||
>> PyModule_AddIntConstant (gdb_module, "SEC_RELOC", SEC_RELOC) < 0 
>> +      || PyModule_AddIntConstant (gdb_module, "SEC_READONLY",
>> SEC_READONLY) < 0 +      || PyModule_AddIntConstant (gdb_module,
>> "SEC_CODE", SEC_CODE) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_DATA", SEC_DATA) < 0 +      ||
>> PyModule_AddIntConstant (gdb_module, "SEC_ROM", SEC_ROM) < 0 +
>> || PyModule_AddIntConstant (gdb_module, "SEC_CONSTRUCTOR", +
>> SEC_CONSTRUCTOR) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_HAS_CONTENTS", +
>> SEC_HAS_CONTENTS) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_NEVER_LOAD", +
>> SEC_NEVER_LOAD) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_THREAD_LOCAL", +
>> SEC_THREAD_LOCAL) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_HAS_GOT_REF", +
>> SEC_HAS_GOT_REF) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_IS_COMMON", +
>> SEC_IS_COMMON) < 0 +      || PyModule_AddIntConstant (gdb_module,
>> "SEC_DEBUGGING", +                                 SEC_DEBUGGING)
>> < 0 +      || PyModule_AddIntConstant (gdb_module,
>> "SEC_IN_MEMORY", +                                 SEC_IN_MEMORY)
>> < 0 +      || PyModule_AddIntConstant (gdb_module, "SEC_EXCLUDE",
>> SEC_EXCLUDE) < 0 +      || PyModule_AddIntConstant (gdb_module,
>> "SEC_SORT_ENTRIES", +
>> SEC_SORT_ENTRIES) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_LINK_ONCE", +
>> SEC_LINK_ONCE) < 0 +      || PyModule_AddIntConstant (gdb_module,
>> "SEC_LINK_DUPLICATES", +
>> SEC_LINK_DUPLICATES) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_LINK_DUPLICATES_DISCARD", +
>> SEC_LINK_DUPLICATES_DISCARD) < 0 +      ||
>> PyModule_AddIntConstant (gdb_module,
>> "SEC_LINK_DUPLICATES_ONE_ONLY", +
>> SEC_LINK_DUPLICATES_ONE_ONLY) < 0 +      ||
>> PyModule_AddIntConstant (gdb_module,
>> "SEC_LINK_DUPLICATES_SAME_SIZE", +
>> SEC_LINK_DUPLICATES_SAME_SIZE) < 0 +      ||
>> PyModule_AddIntConstant (gdb_module, "SEC_LINKER_CREATED", +
>> SEC_LINKER_CREATED) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_KEEP", SEC_KEEP) < 0 +      ||
>> PyModule_AddIntConstant (gdb_module, "SEC_SMALL_DATA", +
>> SEC_SMALL_DATA) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_MERGE", SEC_MERGE) < 0 +      ||
>> PyModule_AddIntConstant (gdb_module, "SEC_STRNGS", SEC_STRINGS) <
>> 0 +      || PyModule_AddIntConstant (gdb_module, "SEC_GROUP",
>> SEC_GROUP) < 0 +      || PyModule_AddIntConstant (gdb_module,
>> "SEC_COFF_SHARED_LIBRARY", +
>> SEC_COFF_SHARED_LIBRARY) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_ELF_REVERSE_COPY", +
>> SEC_ELF_REVERSE_COPY) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_COFF_SHARED", +
>> SEC_COFF_SHARED) < 0 +      || PyModule_AddIntConstant
>> (gdb_module, "SEC_COFF_NOREAD", +
>> SEC_COFF_NOREAD) < 0) +    return -1; ...
> 


-- 
Jeff Mahoney
SUSE Labs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/4] Add SLAB allocator understanding.
  2016-02-02  2:05       ` Ales Novak
@ 2016-02-02  7:22         ` Jan Kiszka
  2016-02-02 13:22           ` Petr Tesarik
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Kiszka @ 2016-02-02  7:22 UTC (permalink / raw)
  To: Ales Novak, Doug Evans; +Cc: Kieran Bingham, gdb-patches, Vlastimil Babka

On 2016-02-02 03:05, Ales Novak wrote:
> On 2016-2-1 23:29, Doug Evans wrote:
> 
>> On Mon, Feb 1, 2016 at 5:21 AM, Kieran Bingham
>> <kieranbingham@gmail.com> wrote:
>>> This is interesting work!
>>>
>>> I had been discussing how we might achieve managing this with Jan @
>>> FOSDEM yesterday.
>>>
>>> I believe a python implementation of this could be possible, and then
>>> this code can live in the Kernel, and be split across architecture
>>> specific layers where necessary to implement handling userspace
>>> application boundaries from the Kernel Awareness.
>>
>> Keeping application specific code with the application instead of gdb
>> is definitely a worthy goal.
>> [one can quibble over whether linux is an application of course,
>> but that's just terminology]
> 
> Yeah, you're right. Yet if we're talking about the SLAB in particular -
> considering with how many objects simultaneously has this subsystem to
> cope, I'm afraid that adding any extra overhead (e.g. the Pythonish)
> will be just painful.
> 
> It's a pity that gdb cannot be extended dynamically, afaics.

First, don't be too sceptical before someone has tried this. And then
there are still options for optimizations, either on the language side
(C extensions to our Python modules, also maintained in-kernel) or more
efficient interfaces for gdb's Python API.

It's definitely worth exploring this first before adding Linux kernel
release specific things to gdb, which is going to be even more painful
to maintain.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/4] Add SLAB allocator understanding.
  2016-02-01 22:30     ` Doug Evans
  2016-02-02  2:05       ` Ales Novak
@ 2016-02-02  8:11       ` Kieran Bingham
  1 sibling, 0 replies; 31+ messages in thread
From: Kieran Bingham @ 2016-02-02  8:11 UTC (permalink / raw)
  To: Doug Evans
  Cc: Ales Novak, gdb-patches, Vlastimil Babka, Jan Kiszka, Lee Jones,
	Peter Griffin



On 01/02/16 22:29, Doug Evans wrote:
> On Mon, Feb 1, 2016 at 5:21 AM, Kieran Bingham <kieranbingham@gmail.com> wrote:
>> This is interesting work!
>>
>> I had been discussing how we might achieve managing this with Jan @
>> FOSDEM yesterday.
>>
>> I believe a python implementation of this could be possible, and then
>> this code can live in the Kernel, and be split across architecture
>> specific layers where necessary to implement handling userspace
>> application boundaries from the Kernel Awareness.
> 
> Keeping application specific code with the application instead of gdb
> is definitely a worthy goal.
> [one can quibble over whether linux is an application of course,
> but that's just terminology]

It's just a big fancy application which supports modules, and can talk
to hardware. :D </me ducks to avoid the flying bricks>

> 
>> I believe that if properly abstracted (which I think it looks like this
>> already will be), with kdump as a target layer, we can implement the
>> Kernel awareness layers above, so that they can be common to all of our
>> use case scenarios.
>>
>> I have recently proposed creating a gdb.Target object, so that we can
>> layer the kernel specific code on top as a higher stratum layer. This
>> code can then live in the Kernel, and be version specific there, and
>> would then cooperate with the layers below, be that a live target over
>> JTAG, or a virtualised qemu/kvm, or a core dump file:
> 
> Providing gdb.Target is also a worthy goal.

Perfect, I'm glad to hear this.

> I hope someone will take this on.

That's my current Work In Progress!

--
Kieran

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-02-02  2:56     ` Jeff Mahoney
@ 2016-02-02  8:25       ` Kieran Bingham
  2016-02-03 17:55       ` Jeff Mahoney
  1 sibling, 0 replies; 31+ messages in thread
From: Kieran Bingham @ 2016-02-02  8:25 UTC (permalink / raw)
  To: Jeff Mahoney, Doug Evans, Ales Novak
  Cc: gdb-patches, Peter Griffin, Lee Jones

On 02/02/16 02:55, Jeff Mahoney wrote:
> On 2/1/16 5:22 PM, Doug Evans wrote:
>> On Sun, Jan 31, 2016 at 1:44 PM, Ales Novak <alnovak@suse.cz>
>> wrote:
>>> --- gdb/Makefile.in              |  12 ++ 
>>> gdb/python/py-minsymbol.c    | 353
>>> +++++++++++++++++++++++++++++++++++++ gdb/python/py-objfile.c
>>> |  29 +++- gdb/python/py-section.c      | 401
>>> +++++++++++++++++++++++++++++++++++++++++++ 
>>> gdb/python/py-symbol.c       |  52 ++++-- 
>>> gdb/python/python-internal.h |  14 ++ gdb/python/python.c
>>> |   7 +- 7 files changed, 853 insertions(+), 15 deletions(-) 
>>> create mode 100644 gdb/python/py-minsymbol.c create mode 100644
>>> gdb/python/py-section.c
> 
> 
>> Hi.
> 
> Hi Doug -
> 
>> Part of what this patch is doing is exporting bfd to python. E.g.,
>> all the SEC_* constants.
> 
>> As a rule we absolutely discourage people from using bfd outside
>> of the the binutils+gdb source tree. Either this rule needs to
>> change, or I don't think we can allow this patch. I'd be interested
>> to hear what others in the community think.
> 
> That's unfortunate.  The Linux kernel uses ELF sections for a number
> of purposes.  Most notably is the definition of per-cpu variables.
> Without the ELF section, we can't resolve the addresses for the
> variables.  So, from our perspective, it's a requirement.

Jeff,

I haven't looked into your code specifically to check your per-cpu
implementation detail yet, so I'll just speculate for a moment:

Have you seen that we can obtain per-cpu variables via the per_cpu()
helper in linux.git/scripts/gdb/linux/cpus.py?


def per_cpu(var_ptr, cpu):
    if cpu == -1:
        cpu = get_current_cpu()
    if utils.is_target_arch("sparc:v9"):
        offset = gdb.parse_and_eval(
            "trap_block[{0}].__per_cpu_base".format(str(cpu)))
    else:
        try:
            offset = gdb.parse_and_eval(
                "__per_cpu_offset[{0}]".format(str(cpu)))
        except gdb.error:
            # !CONFIG_SMP case
            offset = 0
    pointer = var_ptr.cast(utils.get_long_type()) + offset
    return pointer.cast(var_ptr.type).dereference()
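
For reference, a minimal call site could look like the sketch below.
This is hypothetical: it assumes the kernel's vmlinux-gdb.py has been
sourced so that the linux.cpus module is importable, and "runqueues" is
merely one example of a per-cpu symbol:

import gdb
from linux import cpus                     # scripts/gdb/linux/cpus.py

rq_ptr = gdb.parse_and_eval("&runqueues")  # pointer to a per-cpu variable
print(cpus.per_cpu(rq_ptr, 0))             # its value as seen by CPU 0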



>> For myself, I would much rather export ELF separately (e.g., a
>> separate python API one can use independent of any particular tool,
>> including gdb), and then have gdb provide the necessary glue to use
>> this API. [I can imagine some compromises being needed, at least
>> for now; e.g., it'd be cumbersome to read in all ELF symbols
>> twice. But fixing that is just an optimization.]
> 
> Ok, that's doable.  As it is, the section code mixes GDB and BFD
> pretty heavily.  It shouldn't be too difficult to separate the two out
> and push the section stuff into a new BFD python interface and
> associate the objfiles with it.

Some of our further work (stretch goals) on Linux Kernel Awareness
will also utilise this, so I will be interested to see how it goes.

> -Jeff
> 
> 
>>> ... +  if (PyModule_AddIntConstant (gdb_module, "SEC_NO_FLAGS",
>>> SEC_NO_FLAGS) < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_ALLOC", SEC_ALLOC) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_LOAD", SEC_LOAD) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_RELOC", SEC_RELOC) < 0 
>>> +      || PyModule_AddIntConstant (gdb_module, "SEC_READONLY",
>>> SEC_READONLY) < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_CODE", SEC_CODE) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_DATA", SEC_DATA) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_ROM", SEC_ROM) < 0 +
>>> || PyModule_AddIntConstant (gdb_module, "SEC_CONSTRUCTOR", +
>>> SEC_CONSTRUCTOR) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_HAS_CONTENTS", +
>>> SEC_HAS_CONTENTS) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_NEVER_LOAD", +
>>> SEC_NEVER_LOAD) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_THREAD_LOCAL", +
>>> SEC_THREAD_LOCAL) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_HAS_GOT_REF", +
>>> SEC_HAS_GOT_REF) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_IS_COMMON", +
>>> SEC_IS_COMMON) < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_DEBUGGING", +                                 SEC_DEBUGGING)
>>> < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_IN_MEMORY", +                                 SEC_IN_MEMORY)
>>> < 0 +      || PyModule_AddIntConstant (gdb_module, "SEC_EXCLUDE",
>>> SEC_EXCLUDE) < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_SORT_ENTRIES", +
>>> SEC_SORT_ENTRIES) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_LINK_ONCE", +
>>> SEC_LINK_ONCE) < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_LINK_DUPLICATES", +
>>> SEC_LINK_DUPLICATES) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_LINK_DUPLICATES_DISCARD", +
>>> SEC_LINK_DUPLICATES_DISCARD) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module,
>>> "SEC_LINK_DUPLICATES_ONE_ONLY", +
>>> SEC_LINK_DUPLICATES_ONE_ONLY) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module,
>>> "SEC_LINK_DUPLICATES_SAME_SIZE", +
>>> SEC_LINK_DUPLICATES_SAME_SIZE) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_LINKER_CREATED", +
>>> SEC_LINKER_CREATED) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_KEEP", SEC_KEEP) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_SMALL_DATA", +
>>> SEC_SMALL_DATA) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_MERGE", SEC_MERGE) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_STRNGS", SEC_STRINGS) <
>>> 0 +      || PyModule_AddIntConstant (gdb_module, "SEC_GROUP",
>>> SEC_GROUP) < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_COFF_SHARED_LIBRARY", +
>>> SEC_COFF_SHARED_LIBRARY) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_ELF_REVERSE_COPY", +
>>> SEC_ELF_REVERSE_COPY) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_COFF_SHARED", +
>>> SEC_COFF_SHARED) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_COFF_NOREAD", +
>>> SEC_COFF_NOREAD) < 0) +    return -1; ...
> 
> 
> 
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Enable gdb to open Linux kernel dumps
  2016-02-01 15:01       ` Jeff Mahoney
@ 2016-02-02  9:12         ` Kieran Bingham
  2016-02-10  3:24         ` Jeff Mahoney
  1 sibling, 0 replies; 31+ messages in thread
From: Kieran Bingham @ 2016-02-02  9:12 UTC (permalink / raw)
  To: Jeff Mahoney, Ales Novak, Kieran Bingham
  Cc: gdb-patches, Peter Griffin, Lee Jones

On 01/02/16 15:01, Jeff Mahoney wrote:
> On 2/1/16 9:32 AM, Ales Novak wrote:
>> On 2016-2-1 12:51, Kieran Bingham wrote:
> 
>>>
>>> On 01/02/16 11:27, Kieran Bingham wrote:
>>>> Hi Ales,
>>>>
>>>> I'm just checking out your tree now to try locally.
>>>>
>>>> It sounds like there is a high level of cross over in our work,
>>>> but I believe our work can complement each other's if we work
>>>> together.
> 
>> Yes. Our primary intention is to open kdumps (i.e. dead images of
>> the fallen kernels), but what can be shared between live and dead
>> kernel debugging, should be shared...
> 
>>>> On 31/01/16 21:44, Ales Novak wrote:
>>>>> Following patches are adding basic ability to access Linux
>>>>> kernel dumps using the libkdumpfile library. They're creating
>>>>> new target "kdump", so all one has to do is to provide
>>>>> appropriate debuginfo and then run "target kdump
>>>>> /path/to/vmcore".
>>>>>
>>>>> The tasks of the dumped kernel are mapped to threads in gdb.
>>>>>
>>>>> Except for that, there's a code adding understanding of Linux
>>>>> SLAB memory allocator, which means we can tell for the given
>>>>> address to which SLAB does it belong, or list objects for
>>>>> give SLAB name - and more.
>>>>>
>>>>> Patches are against "gdb-7.10-release" (but will apply
>>>>> elsewhere).
>>>>>
>>>>> Note: registers of task are fetched accordingly - either from
>>>>> the dump metadata (the active tasks) or from their stacks. It
>>>>> should be noted that as this mechanism varies amongst the
>>>>> kernel versions and configurations, my naive implementation
>>>>> currently covers only the dumps I encounter, handling of
>>>>> different kernel versions is to be added.
>>>> In the work that I am doing, I had expected this to be done in
>>>> python for exactly this reason. The kernel version specifics,
>>>> (and architecture specifics) can then live alongside their
>>>> respective trees.
>>>>> In the near future, our plan is to remove the clumsy C-code
>>>>> handling this and reimplement it in Python - only the binding
>>>>> to certain gdb structures (e.g. thread, regcache) has to be
>>>>> added. A colleague of mine is already working on that.
>>>> This sounds exactly like the work I am doing right now. Could
>>>> you pass on my details to your colleague so we can discuss?
>>>
>>> Aha, or is your colleague Andreas Arnez? I'm just about to reply
>>> to his mail over on gdb@ next.
> 
>> No, it's Jeff Mahoney. His current efforts, which include Python
>> binding to threads' regcaches and more, are at:
> 
>> https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/
> 
>> And yes, you're right I've incorrectly removed authorship from some
>> of his older patches (which in fact are not necessary for the
>> current gdb-kdump to work, they are extending the Python binding).
> 
>> And as you've already found, his older patches are at:
> 
>> https://github.com/jeffmahoney/py-crash
> 
> Hi guys -
> 
> Ales gave me the heads up that you were discussing these.  The github
> repo is old and I haven't touched it in a year or so.  The link to my
> git server is the active one, but I should be clear that this is
> currently a WIP from my perspective.  I've been doing my work in the
> rel-7.10.1-kdump branch, which is based on the gdb-7.10.1-release tag,
> plus some SUSE patches to handle build-ids and external debuginfo files.

Of course, unstable branches are expected at this point. Thanks for the
reference.

Have you used the kernel python commands at all (CONFIG_GDB_SCRIPTS)? I
have implemented a few recently, although they're not quite finished yet
(mainly just lx-interrupts left to complete, the radix-tree parser
doesn't work yet).

The expected working commands include:
  lx-version
     (although I think we've discovered a bug in gdb.Value.string())
  lx-cmdline
  lx-iomem
  lx-ioports
  lx-meminfo
  lx-mounts

I'd love to hear if they work for you on a non-running target, or if
not, I can look at fixing them up.

I believe these commands are probably even more useful on non-running
targets than on a running target!

My python work lives at:
https://git.linaro.org/people/kieran.bingham/linux.git gdb-patches



> This branch is subject to rebasing as I make progress, but there should
> be a stable base underneath it that I can condense and put into a
> separate branch for public consumption.

Of course the same applies to my branches I'm afraid :)
--
Kieran

> 
> -Jeff
> 
> 
>>>
>>>
>>>
>>>>
>>>> I recently made a posting on gdb@ suggesting the addition of a 
>>>> gdb.Target object to work towards implementing this, and I have
>>>> been liaising with Jan Kiszka to manage the Linux/scripts/gdb/
>>>> integration.
>>>>
>>>>
>>>>
>>>>> The github home of these patches is at:
>>>>>
>>>>> https://github.com/alesax/gdb-kdump/tree/for-next
>>>>>
>>>>> libkdumpfile lives at:
>>>>>
>>>>> https://github.com/ptesarik/libkdumpfile
>>>>>
>>>>> Fork adding the SLAB support lives at:
>>>>>
>>>>> https://github.com/tehcaster/gdb-kdump/tree/slab-support
>>>>>
>>>>>
>>>> Regards
>>>>
>>>> Kieran Bingham
>>>>
>>>
> 
> 
> 
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/4] Add SLAB allocator understanding.
  2016-02-01 13:21   ` Kieran Bingham
  2016-02-01 22:30     ` Doug Evans
@ 2016-02-02 10:04     ` Vlastimil Babka
  1 sibling, 0 replies; 31+ messages in thread
From: Vlastimil Babka @ 2016-02-02 10:04 UTC (permalink / raw)
  To: Kieran Bingham, Ales Novak, gdb-patches; +Cc: Jan Kiszka

On 02/01/2016 02:21 PM, Kieran Bingham wrote:
> This is interesting work!
>
> I had been discussing how we might achieve managing this with Jan @
> FOSDEM yesterday.
>
> I believe a python implementation of this could be possible, and then
> this code can live in the Kernel, and be split across architecture
> specific layers where necessary to implement handling userspace
> application boundaries from the Kernel Awareness.

Hi,

I understand that the idea of python scripts living in the kernel tree 
looks desirable, but I see several practical drawbacks. My main goal 
with this is to have a better replacement for the crash [1] tool for 
kernel crash dump analysis. The tool supports dumps from a range of 
kernel versions, and so should the replacement. We regularly deal with 
crash dumps from 3.0-based and newer kernels, so backporting some 
kernel-version-specific python scripts to those kernel versions (or even 
older) is infeasible. Then we would have to assume that any kernel patch 
author changing a subsystem doesn't forget to update the in-kernel 
scripts; otherwise they easily get out of sync in the git history.
Lastly, it's a bit more comfortable if the only input you need is the
dump, vmlinux and vmlinux.debug, without having to check out scripts
from git.

So I believe it's better if the tool could understand and work with a 
range of kernel versions by itself, like crash. The split between 
functionality in C and python is a separate question. I understand you 
wouldn't want to add all the required knowledge into gdb proper, so what 
other options are there? Some kind of contrib/python/kernel directory
for the python scripts (but not version-specific)? How can we similarly
separate the required C code, if it turns out that doing *everything* in
python, wrapping only the lowest-level gdb concepts, would be too slow?

Thanks,
Vlastimil

[1] https://people.redhat.com/anderson/

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/4] Add SLAB allocator understanding.
  2016-02-02  7:22         ` Jan Kiszka
@ 2016-02-02 13:22           ` Petr Tesarik
  2016-02-02 14:42             ` Jeff Mahoney
  0 siblings, 1 reply; 31+ messages in thread
From: Petr Tesarik @ 2016-02-02 13:22 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Ales Novak, Doug Evans, Kieran Bingham, gdb-patches, Vlastimil Babka

On Tue, 2 Feb 2016 08:22:25 +0100
Jan Kiszka <jan.kiszka@siemens.com> wrote:

> On 2016-02-02 03:05, Ales Novak wrote:
> > On 2016-2-1 23:29, Doug Evans wrote:
> > 
>[...]
> >> Keeping application specific code with the application instead of gdb
> >> is definitely a worthy goal.
> >> [one can quibble over whether linux is an application of course,
> >> but that's just terminology]
> > 
> > Yeah, you're right. Yet if we're talking about the SLAB in particular -
> > considering with how many objects simultaneously has this subsystem to
> > cope, I'm afraid that adding any extra overhead (e.g. the Pythonish)
> > will be just painful.
> > 
> > It's a pity that gdb cannot be extended dynamically, afaics.
> 
> First, don't be too sceptical before some has tried this. And then there
> are still options for optimizations, either on the language side (C
> extension to our Python modules, also in-kernel maintained) or more
> efficient interfaces for gdb's Python API.
> 
> It's definitely worth exploring this first before adding Linux kernel
> release specific things to gdb, which is going to be even more painful
> to maintain.

I agree that putting Linux-specific code into the GDB main project is a
bit unfortunate. But this indeed happens because there is no way to add
an external module to GDB. In effect, there is little choice: all code
must be either accepted by the (monolithic) GDB project, or it must be
maintained as a custom out-of-tree patch.

Now, maintaining out-of-tree code is just too much pain. This is (in my
opinion) the main reason people are so excited about Python scripting:
it's the only available stable API that can be used to enhance GDB with
things that do not belong to the core GDB. Plus, this API is incomplete
(as evidenced by Jeff's patch set), and extending it is definitely more
work than exporting existing C functions for use by modules, slowing
down further development of GDB.

Note that this limitation is more political than technical, but this
fact probably only means it's less likely to change...

Just my two cents,
Petr T

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/4] Add SLAB allocator understanding.
  2016-02-02 13:22           ` Petr Tesarik
@ 2016-02-02 14:42             ` Jeff Mahoney
  0 siblings, 0 replies; 31+ messages in thread
From: Jeff Mahoney @ 2016-02-02 14:42 UTC (permalink / raw)
  To: Petr Tesarik, Jan Kiszka
  Cc: Ales Novak, Doug Evans, Kieran Bingham, gdb-patches, Vlastimil Babka


On 2/2/16 8:21 AM, Petr Tesarik wrote:
> On Tue, 2 Feb 2016 08:22:25 +0100 Jan Kiszka
> <jan.kiszka@siemens.com> wrote:
> 
>> On 2016-02-02 03:05, Ales Novak wrote:
>>> On 2016-2-1 23:29, Doug Evans wrote:
>>> 
>> [...]
>>>> Keeping application specific code with the application
>>>> instead of gdb is definitely a worthy goal. [one can quibble
>>>> over whether linux is an application of course, but that's
>>>> just terminology]
>>> 
>>> Yeah, you're right. Yet if we're talking about the SLAB in
>>> particular - considering with how many objects simultaneously
>>> has this subsystem to cope, I'm afraid that adding any extra
>>> overhead (e.g. the Pythonish) will be just painful.
>>> 
>>> It's a pity that gdb cannot be extended dynamically, afaics.
>> 
>> First, don't be too sceptical before some has tried this. And
>> then there are still options for optimizations, either on the
>> language side (C extension to our Python modules, also in-kernel
>> maintained) or more efficient interfaces for gdb's Python API.
>> 
>> It's definitely worth exploring this first before adding Linux
>> kernel release specific things to gdb, which is going to be even
>> more painful to maintain.
> 
> I agree that putting Linux-specific code into the GDB main project
> is a bit unfortunate. But this indeed happens because there is no
> way to add an external module to GDB. In effect, there is little
> choice: all code must be either accepted by the (monolithic) GDB
> project, or it must be maintained as a custom out-of-tree patch.
> 
> Now, maintaining out-of-tree code is just too much pain. This is
> (in my opinion) the main reason people are so excited about Python
> scripting: it's the only available stable API that can be used to
> enhance GDB with things that do not belong to the core GDB. Plus,
> this API is incomplete (as evidenced by Jeff's patch set), and
> extending it is definitely more work than exporting existing C
> functions for use by modules, slowing down further development of
> GDB.

I only partially agree here.  Using Python to extend GDB to support
e.g. libkdumpfile would be a workaround.  I looked into it briefly and
decided against it. Extending the Python API has been an investment,
though.  Nearly everything I'm doing in the GDB code is generic.  I
really do want to have most of the functionality we have now with
crash implemented as Python modules.  Extensions in crash need to be
compiled in, written in sial, or use the grafted-on python plugin for
it.  All these options are terrible and not at all conducive to
collaborative, iterative improvement.  As we build up more
infrastructure, it becomes a lot easier for people to write quick
commands to automate a lot of the work we end up forced to do to get
crash to do what we need.

-Jeff

-- 
Jeff Mahoney
SUSE Labs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-02-02  2:56     ` Jeff Mahoney
  2016-02-02  8:25       ` Kieran Bingham
@ 2016-02-03 17:55       ` Jeff Mahoney
  2016-02-03 18:31         ` Doug Evans
  1 sibling, 1 reply; 31+ messages in thread
From: Jeff Mahoney @ 2016-02-03 17:55 UTC (permalink / raw)
  To: Doug Evans, Ales Novak; +Cc: gdb-patches


On 2/1/16 9:55 PM, Jeff Mahoney wrote:
> On 2/1/16 5:22 PM, Doug Evans wrote:
>> On Sun, Jan 31, 2016 at 1:44 PM, Ales Novak <alnovak@suse.cz> 
>> wrote:
>>> --- gdb/Makefile.in              |  12 ++ 
>>> gdb/python/py-minsymbol.c    | 353 
>>> +++++++++++++++++++++++++++++++++++++ gdb/python/py-objfile.c |
>>> 29 +++- gdb/python/py-section.c      | 401 
>>> +++++++++++++++++++++++++++++++++++++++++++ 
>>> gdb/python/py-symbol.c       |  52 ++++-- 
>>> gdb/python/python-internal.h |  14 ++ gdb/python/python.c |   7
>>> +- 7 files changed, 853 insertions(+), 15 deletions(-) create
>>> mode 100644 gdb/python/py-minsymbol.c create mode 100644 
>>> gdb/python/py-section.c
> 
> 
>> Hi.
> 
> Hi Doug -
> 
>> Part of what this patch is doing is exporting bfd to python.
>> E.g., all the SEC_* constants.
> 
>> As a rule we absolutely discourage people from using bfd outside 
>> of the the binutils+gdb source tree. Either this rule needs to 
>> change, or I don't think we can allow this patch. I'd be
>> interested to hear what others in the community think.
> 
> That's unfortunate.  The Linux kernel uses ELF sections for a
> number of purposes.  Most notably is the definition of per-cpu
> variables. Without the ELF section, we can't resolve the addresses
> for the variables.  So, from our perspective, it's a requirement.
> 
>> For myself, I would much rather export ELF separately (e.g., a 
>> separate python API one can use independent of any particular
>> tool, including gdb), and then have gdb provide the necessary
>> glue to use this API. [I can imagine some compromises being
>> needed, at least for now; e.g., it'd be cumbersome to read in all
>> ELF symbols twice. But fixing that is just an optimization.]
> 
> Ok, that's doable.  As it is, the section code mixes GDB and BFD 
> pretty heavily.  It shouldn't be too difficult to separate the two
> out and push the section stuff into a new BFD python interface and 
> associate the objfiles with it.

And here's what I've come up with.  Does this constitute enough of a
separation?  It /should/ cross over into the BFD code in the same way
that the GDB code does: As soon as we hit a bfd object or a
bfd_section object, we call into bfd's new python API to generate the
objects.

https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/?h=gdb/python-bfd
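
For concreteness, the Python-side shape of that crossover would be
roughly the sketch below.  The naming is entirely hypothetical - neither
a bfd Python module nor these attributes exist in stock gdb; this only
illustrates the direction the branch takes:

import gdb

objfile = gdb.objfiles()[0]
abfd = objfile.bfd          # hypothetical: the objfile's underlying bfd object
for sec in abfd.sections:   # hypothetical: bfd section objects carrying SEC_* flags
    print(sec.name, sec.flags)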

For the fully-integrated kdump work, use the python-bfd-kdump branch
(or SUSE folks, python-bfd-kdump-buildid will pick up the separate
debuginfos as we usually expect).

-Jeff

>>> ... +  if (PyModule_AddIntConstant (gdb_module,
>>> "SEC_NO_FLAGS", SEC_NO_FLAGS) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_ALLOC", SEC_ALLOC) <
>>> 0 +      || PyModule_AddIntConstant (gdb_module, "SEC_LOAD",
>>> SEC_LOAD) < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_RELOC", SEC_RELOC) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_READONLY", SEC_READONLY) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_CODE", SEC_CODE) < 0
>>> +      || PyModule_AddIntConstant (gdb_module, "SEC_DATA",
>>> SEC_DATA) < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_ROM", SEC_ROM) < 0 + || PyModule_AddIntConstant
>>> (gdb_module, "SEC_CONSTRUCTOR", + SEC_CONSTRUCTOR) < 0 +
>>> || PyModule_AddIntConstant (gdb_module, "SEC_HAS_CONTENTS", + 
>>> SEC_HAS_CONTENTS) < 0 +      || PyModule_AddIntConstant 
>>> (gdb_module, "SEC_NEVER_LOAD", + SEC_NEVER_LOAD) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_THREAD_LOCAL", + 
>>> SEC_THREAD_LOCAL) < 0 +      || PyModule_AddIntConstant 
>>> (gdb_module, "SEC_HAS_GOT_REF", + SEC_HAS_GOT_REF) < 0 +
>>> || PyModule_AddIntConstant (gdb_module, "SEC_IS_COMMON", + 
>>> SEC_IS_COMMON) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_DEBUGGING", +
>>> SEC_DEBUGGING) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_IN_MEMORY", +
>>> SEC_IN_MEMORY) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_EXCLUDE", SEC_EXCLUDE) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_SORT_ENTRIES", + 
>>> SEC_SORT_ENTRIES) < 0 +      || PyModule_AddIntConstant 
>>> (gdb_module, "SEC_LINK_ONCE", + SEC_LINK_ONCE) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_LINK_DUPLICATES", + 
>>> SEC_LINK_DUPLICATES) < 0 +      || PyModule_AddIntConstant 
>>> (gdb_module, "SEC_LINK_DUPLICATES_DISCARD", + 
>>> SEC_LINK_DUPLICATES_DISCARD) < 0 +      || 
>>> PyModule_AddIntConstant (gdb_module, 
>>> "SEC_LINK_DUPLICATES_ONE_ONLY", + SEC_LINK_DUPLICATES_ONE_ONLY)
>>> < 0 +      || PyModule_AddIntConstant (gdb_module, 
>>> "SEC_LINK_DUPLICATES_SAME_SIZE", + 
>>> SEC_LINK_DUPLICATES_SAME_SIZE) < 0 +      || 
>>> PyModule_AddIntConstant (gdb_module, "SEC_LINKER_CREATED", + 
>>> SEC_LINKER_CREATED) < 0 +      || PyModule_AddIntConstant 
>>> (gdb_module, "SEC_KEEP", SEC_KEEP) < 0 +      || 
>>> PyModule_AddIntConstant (gdb_module, "SEC_SMALL_DATA", + 
>>> SEC_SMALL_DATA) < 0 +      || PyModule_AddIntConstant 
>>> (gdb_module, "SEC_MERGE", SEC_MERGE) < 0 +      || 
>>> PyModule_AddIntConstant (gdb_module, "SEC_STRNGS", SEC_STRINGS)
>>> < 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_GROUP", SEC_GROUP) < 0 +      || PyModule_AddIntConstant
>>> (gdb_module, "SEC_COFF_SHARED_LIBRARY", + 
>>> SEC_COFF_SHARED_LIBRARY) < 0 +      || PyModule_AddIntConstant 
>>> (gdb_module, "SEC_ELF_REVERSE_COPY", + SEC_ELF_REVERSE_COPY) <
>>> 0 +      || PyModule_AddIntConstant (gdb_module,
>>> "SEC_COFF_SHARED", + SEC_COFF_SHARED) < 0 +      ||
>>> PyModule_AddIntConstant (gdb_module, "SEC_COFF_NOREAD", + 
>>> SEC_COFF_NOREAD) < 0) +    return -1; ...
> 
> 
> 
> 

-- 
Jeff Mahoney
SUSE Labs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-02-03 17:55       ` Jeff Mahoney
@ 2016-02-03 18:31         ` Doug Evans
  2016-02-03 19:29           ` Jeff Mahoney
  2016-02-04 17:25           ` Petr Tesarik
  0 siblings, 2 replies; 31+ messages in thread
From: Doug Evans @ 2016-02-03 18:31 UTC (permalink / raw)
  To: Jeff Mahoney; +Cc: Ales Novak, gdb-patches

On Wed, Feb 3, 2016 at 9:55 AM, Jeff Mahoney <jeffm@suse.com> wrote:
>...
>>> Hi.
>>
>> Hi Doug -
>>
>>> Part of what this patch is doing is exporting bfd to python.
>>> E.g., all the SEC_* constants.
>>
>>> As a rule we absolutely discourage people from using bfd outside
>>> of the the binutils+gdb source tree. Either this rule needs to
>>> change, or I don't think we can allow this patch. I'd be
>>> interested to hear what others in the community think.
>>
>> That's unfortunate.  The Linux kernel uses ELF sections for a
>> number of purposes.  Most notably is the definition of per-cpu
>> variables. Without the ELF section, we can't resolve the addresses
>> for the variables.  So, from our perspective, it's a requirement.
>>
>>> For myself, I would much rather export ELF separately (e.g., a
>>> separate python API one can use independent of any particular
>>> tool, including gdb), and then have gdb provide the necessary
>>> glue to use this API. [I can imagine some compromises being
>>> needed, at least for now; e.g., it'd be cumbersome to read in all
>>> ELF symbols twice. But fixing that is just an optimization.]
>>
>> Ok, that's doable.  As it is, the section code mixes GDB and BFD
>> pretty heavily.  It shouldn't be too difficult to separate the two
>> out and push the section stuff into a new BFD python interface and
>> associate the objfiles with it.
>
> And here's what I've come up with.  Does this constitute enough of a
> separation?  It /should/ cross over into the BFD code in the same way
> that the GDB code does: As soon as we hit a bfd object or a
> bfd_section object, we call into bfd's new python API to generate the
> objects.
>
> https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/?h=gdb/python-bfd
>
> For the fully-integrated kdump work, use the python-bfd-kdump branch
> (or SUSE folks, python-bfd-kdump-buildid will pick up the separate
> debuginfos as we usually expect).

Separation isn't the issue, unfortunately.
The issue is that we cannot export bfd to python, period.

I'm certainly open to others convincing me I'm wrong.
But that is my understanding.
What we can do is export ELF, and that is what I'd like to see.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-02-03 18:31         ` Doug Evans
@ 2016-02-03 19:29           ` Jeff Mahoney
  2016-02-04 17:25           ` Petr Tesarik
  1 sibling, 0 replies; 31+ messages in thread
From: Jeff Mahoney @ 2016-02-03 19:29 UTC (permalink / raw)
  To: Doug Evans; +Cc: Ales Novak, gdb-patches


On 2/3/16 1:30 PM, Doug Evans wrote:
> On Wed, Feb 3, 2016 at 9:55 AM, Jeff Mahoney <jeffm@suse.com>
> wrote:
>> ...
>>>> Hi.
>>> 
>>> Hi Doug -
>>> 
>>>> Part of what this patch is doing is exporting bfd to python. 
>>>> E.g., all the SEC_* constants.
>>> 
>>>> As a rule we absolutely discourage people from using bfd
>>>> outside of the the binutils+gdb source tree. Either this rule
>>>> needs to change, or I don't think we can allow this patch.
>>>> I'd be interested to hear what others in the community
>>>> think.
>>> 
>>> That's unfortunate.  The Linux kernel uses ELF sections for a 
>>> number of purposes.  Most notably is the definition of per-cpu 
>>> variables. Without the ELF section, we can't resolve the
>>> addresses for the variables.  So, from our perspective, it's a
>>> requirement.
>>> 
>>>> For myself, I would much rather export ELF separately (e.g.,
>>>> a separate python API one can use independent of any
>>>> particular tool, including gdb), and then have gdb provide
>>>> the necessary glue to use this API. [I can imagine some
>>>> compromises being needed, at least for now; e.g., it'd be
>>>> cumbersome to read in all ELF symbols twice. But fixing that
>>>> is just an optimization.]
>>> 
>>> Ok, that's doable.  As it is, the section code mixes GDB and
>>> BFD pretty heavily.  It shouldn't be too difficult to separate
>>> the two out and push the section stuff into a new BFD python
>>> interface and associate the objfiles with it.
>> 
>> And here's what I've come up with.  Does this constitute enough
>> of a separation?  It /should/ cross over into the BFD code in the
>> same way that the GDB code does: As soon as we hit a bfd object
>> or a bfd_section object, we call into bfd's new python API to
>> generate the objects.
>> 
>> https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/?h=gdb/python-bfd
>>
>>
>> 
>> For the fully-integrated kdump work, use the python-bfd-kdump branch
>> (or SUSE folks, python-bfd-kdump-buildid will pick up the
>> separate debuginfos as we usually expect).
> 
> Separation isn't the issue, unfortunately. The issue is that we
> cannot export bfd to python, period.
> 
> I'm certainly open to others convincing me I'm wrong. But that is
> my understanding. What we can do is export ELF, and that is what
> I'd like to see.

Ok, so looking at this again, I don't need full section information.
I just need a name.   Would it be acceptable to just export the name
of the section via gdb.Symbol and my new gdb.MinSymbol instead?
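
For illustration, the usage I have in mind would be roughly the sketch
below (hypothetical: the lookup function and the attribute spelling come
from the proposed gdb.MinSymbol work, not from stock gdb):

import gdb

# Assumed API: a minimal-symbol lookup that returns an object carrying
# the name of the section the symbol lives in.
msym = gdb.lookup_minsymbol("__per_cpu_start")
if msym is not None and msym.section == ".data..percpu":
    # enough to identify per-cpu symbols without exporting bfd
    pass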

-Jeff

-- 
Jeff Mahoney
SUSE Labs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 1/4] Create new target "kdump" which uses libkdumpfile: https://github.com/ptesarik/libkdumpfile to access contents of compressed kernel dump.
  2016-01-31 21:45 ` [PATCH 1/4] Create new target "kdump" which uses libkdumpfile: https://github.com/ptesarik/libkdumpfile to access contents of compressed kernel dump Ales Novak
@ 2016-02-04 12:40   ` Pedro Alves
  2016-02-04 12:45     ` Ales Novak
  0 siblings, 1 reply; 31+ messages in thread
From: Pedro Alves @ 2016-02-04 12:40 UTC (permalink / raw)
  To: Ales Novak, gdb-patches

I didn't see this mentioned anywhere, but ...

On 01/31/2016 09:44 PM, Ales Novak wrote:
> +++ b/LICENSE
> @@ -0,0 +1,340 @@
> +GNU GENERAL PUBLIC LICENSE
> +                       Version 2, June 1991
> +

... why did you need this?  What is under GPLv2?

GDB is GPLv3+, which makes that a problem.

Thanks,
Pedro Alves

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 1/4] Create new target "kdump" which uses libkdumpfile: https://github.com/ptesarik/libkdumpfile to access contents of compressed kernel dump.
  2016-02-04 12:40   ` Pedro Alves
@ 2016-02-04 12:45     ` Ales Novak
  0 siblings, 0 replies; 31+ messages in thread
From: Ales Novak @ 2016-02-04 12:45 UTC (permalink / raw)
  To: Pedro Alves; +Cc: gdb-patches

Hi,

thanks for noticing. Though I am aware it looks like a Trojan horse, I
have no clue where this came from...

On 2016-2-4 13:40, Pedro Alves wrote:

> I didn't see this mentioned anywhere, but ...
>
> On 01/31/2016 09:44 PM, Ales Novak wrote:
>> +++ b/LICENSE
>> @@ -0,0 +1,340 @@
>> +GNU GENERAL PUBLIC LICENSE
>> +                       Version 2, June 1991
>> +
>
> ... why did you need this?  What is under GPLv2?
>
> GDB is GPLv3+, which makes that a problem.
>
> Thanks,
> Pedro Alves
>
>

-- 
Ales Novak

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-02-03 18:31         ` Doug Evans
  2016-02-03 19:29           ` Jeff Mahoney
@ 2016-02-04 17:25           ` Petr Tesarik
  2016-02-04 18:32             ` Matt Rice
  2016-02-04 22:27             ` Doug Evans
  1 sibling, 2 replies; 31+ messages in thread
From: Petr Tesarik @ 2016-02-04 17:25 UTC (permalink / raw)
  To: Doug Evans; +Cc: Jeff Mahoney, Ales Novak, gdb-patches

Hi Doug,

On Wed, 3 Feb 2016 10:30:20 -0800
Doug Evans <dje@google.com> wrote:

> On Wed, Feb 3, 2016 at 9:55 AM, Jeff Mahoney <jeffm@suse.com> wrote:
> >...
> >>> Hi.
> >>
> >> Hi Doug -
> >>
> >>> Part of what this patch is doing is exporting bfd to python.
> >>> E.g., all the SEC_* constants.
> >>
> >>> As a rule we absolutely discourage people from using bfd outside
> >>> of the the binutils+gdb source tree. Either this rule needs to
> >>> change, or I don't think we can allow this patch. I'd be
> >>> interested to hear what others in the community think.
> >>
> >> That's unfortunate.  The Linux kernel uses ELF sections for a
> >> number of purposes.  Most notably is the definition of per-cpu
> >> variables. Without the ELF section, we can't resolve the addresses
> >> for the variables.  So, from our perspective, it's a requirement.
> >>
> >>> For myself, I would much rather export ELF separately (e.g., a
> >>> separate python API one can use independent of any particular
> >>> tool, including gdb), and then have gdb provide the necessary
> >>> glue to use this API. [I can imagine some compromises being
> >>> needed, at least for now; e.g., it'd be cumbersome to read in all
> >>> ELF symbols twice. But fixing that is just an optimization.]
> >>
> >> Ok, that's doable.  As it is, the section code mixes GDB and BFD
> >> pretty heavily.  It shouldn't be too difficult to separate the two
> >> out and push the section stuff into a new BFD python interface and
> >> associate the objfiles with it.
> >
> > And here's what I've come up with.  Does this constitute enough of a
> > separation?  It /should/ cross over into the BFD code in the same way
> > that the GDB code does: As soon as we hit a bfd object or a
> > bfd_section object, we call into bfd's new python API to generate the
> > objects.
> >
> > https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/?h=gdb/python-bfd
> >
> > For the fully-integrated kdump work, use the python-bfd-kdump branch
> > (or SUSE folks, python-bfd-kdump-buildid will pick up the separate
> > debuginfos as we usually expect).
> 
> Separation isn't the issue, unfortunately.
> The issue is that we cannot export bfd to python, period.

Excuse my ignorance, but can you explain a bit more why BFD should not
be used? I'm sure there has been some discussion on that topic; a
pointer in the right direction would be welcome.

TIA,
Petr Tesarik

^ permalink raw reply	[flat|nested] 31+ messages in thread
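
To make the per-cpu point quoted above concrete, here is a minimal,
hypothetical sketch (not part of the posted patches) of how a gdb Python
script could resolve a per-cpu variable once kernel debuginfo is loaded.
It assumes the standard upstream symbol names (__per_cpu_offset, the
.data..percpu section) on an SMP x86-64 kernel; the helper name per_cpu
is made up for illustration.

import gdb

def per_cpu(name, cpu):
    """Hypothetical helper: read per-cpu variable NAME for a given CPU."""
    sym = gdb.lookup_global_symbol(name)
    if sym is None:
        raise gdb.GdbError("cannot find symbol " + name)
    var = sym.value()
    # For per-cpu variables the symbol address is only an offset into
    # the .data..percpu section; the effective address is that offset
    # plus the per-CPU base kept in the kernel's __per_cpu_offset[].
    ulong = gdb.lookup_type("unsigned long")
    base = gdb.parse_and_eval("__per_cpu_offset[%d]" % cpu)
    addr = var.address.cast(ulong) + base.cast(ulong)
    return addr.cast(var.type.pointer()).dereference()

# e.g. per_cpu("runqueues", 0)["nr_running"]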

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-02-04 17:25           ` Petr Tesarik
@ 2016-02-04 18:32             ` Matt Rice
  2016-02-04 22:27             ` Doug Evans
  1 sibling, 0 replies; 31+ messages in thread
From: Matt Rice @ 2016-02-04 18:32 UTC (permalink / raw)
  To: Petr Tesarik; +Cc: Doug Evans, Jeff Mahoney, Ales Novak, gdb-patches

On Thu, Feb 4, 2016 at 9:25 AM, Petr Tesarik <ptesarik@suse.cz> wrote:
> Hi Doug,
>
> On Wed, 3 Feb 2016 10:30:20 -0800
> Doug Evans <dje@google.com> wrote:
>
>> On Wed, Feb 3, 2016 at 9:55 AM, Jeff Mahoney <jeffm@suse.com> wrote:
>> >...
>> >>> Hi.
>> >>
>> >> Hi Doug -
>> >>
>> >>> Part of what this patch is doing is exporting bfd to python.
>> >>> E.g., all the SEC_* constants.
>> >>
>> >>> As a rule we absolutely discourage people from using bfd outside
>> >>> of the the binutils+gdb source tree. Either this rule needs to
>> >>> change, or I don't think we can allow this patch. I'd be
>> >>> interested to hear what others in the community think.
>> >>
>> >> That's unfortunate.  The Linux kernel uses ELF sections for a
>> >> number of purposes.  Most notably is the definition of per-cpu
>> >> variables. Without the ELF section, we can't resolve the addresses
>> >> for the variables.  So, from our perspective, it's a requirement.
>> >>
>> >>> For myself, I would much rather export ELF separately (e.g., a
>> >>> separate python API one can use independent of any particular
>> >>> tool, including gdb), and then have gdb provide the necessary
>> >>> glue to use this API. [I can imagine some compromises being
>> >>> needed, at least for now; e.g., it'd be cumbersome to read in all
>> >>> ELF symbols twice. But fixing that is just an optimization.]
>> >>
>> >> Ok, that's doable.  As it is, the section code mixes GDB and BFD
>> >> pretty heavily.  It shouldn't be too difficult to separate the two
>> >> out and push the section stuff into a new BFD python interface and
>> >> associate the objfiles with it.
>> >
>> > And here's what I've come up with.  Does this constitute enough of a
>> > separation?  It /should/ cross over into the BFD code in the same way
>> > that the GDB code does: As soon as we hit a bfd object or a
>> > bfd_section object, we call into bfd's new python API to generate the
>> > objects.
>> >
>> > https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/?h=gdb/python-bfd
>> >
>> > For the fully-integrated kdump work, use the python-bfd-kdump branch
>> > (or SUSE folks, python-bfd-kdump-buildid will pick up the separate
>> > debuginfos as we usually expect).
>>
>> Separation isn't the issue, unfortunately.
>> The issue is that we cannot export bfd to python, period.
>
> Excuse my ignorance, but can you explain a bit more why BFD should not
> be used? I'm sure there has been some discussion on that topic; a
> pointer in the right direction would be welcome.

BFD has never provided releases that promise conformance to a stable
API/ABI; the BFD library has always been an unstable API which you
include directly and use locally in your program.

Currently, when a change to BFD would break some BFD-using program,
that program is fixed to use the new API.

Exporting BFD from gdb entails a level of certainty about how BFD may
change in the future, so that this exported API can be kept the same in
gdb as BFD changes underneath it.

The closest thing I could find to this effect is from the BFD docs:
"BFD is normally built as part of another package.  See the build
instructions for that package."

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/4] Add Jeff Mahoney's py-crash patches.
  2016-02-04 17:25           ` Petr Tesarik
  2016-02-04 18:32             ` Matt Rice
@ 2016-02-04 22:27             ` Doug Evans
  1 sibling, 0 replies; 31+ messages in thread
From: Doug Evans @ 2016-02-04 22:27 UTC (permalink / raw)
  To: Petr Tesarik; +Cc: Jeff Mahoney, Ales Novak, gdb-patches

On Thu, Feb 4, 2016 at 9:25 AM, Petr Tesarik <ptesarik@suse.cz> wrote:
>
> Hi Doug,
>
> On Wed, 3 Feb 2016 10:30:20 -0800
> Doug Evans <dje@google.com> wrote:
>
> > On Wed, Feb 3, 2016 at 9:55 AM, Jeff Mahoney <jeffm@suse.com> wrote:
> > >...
> > >>> Hi.
> > >>
> > >> Hi Doug -
> > >>
> > >>> Part of what this patch is doing is exporting bfd to python.
> > >>> E.g., all the SEC_* constants.
> > >>
> > >>> As a rule we absolutely discourage people from using bfd outside
> > >>> of the the binutils+gdb source tree. Either this rule needs to
> > >>> change, or I don't think we can allow this patch. I'd be
> > >>> interested to hear what others in the community think.
> > >>
> > >> That's unfortunate.  The Linux kernel uses ELF sections for a
> > >> number of purposes.  Most notably is the definition of per-cpu
> > >> variables. Without the ELF section, we can't resolve the addresses
> > >> for the variables.  So, from our perspective, it's a requirement.
> > >>
> > >>> For myself, I would much rather export ELF separately (e.g., a
> > >>> separate python API one can use independent of any particular
> > >>> tool, including gdb), and then have gdb provide the necessary
> > >>> glue to use this API. [I can imagine some compromises being
> > >>> needed, at least for now; e.g., it'd be cumbersome to read in all
> > >>> ELF symbols twice. But fixing that is just an optimization.]
> > >>
> > >> Ok, that's doable.  As it is, the section code mixes GDB and BFD
> > >> pretty heavily.  It shouldn't be too difficult to separate the two
> > >> out and push the section stuff into a new BFD python interface and
> > >> associate the objfiles with it.
> > >
> > > And here's what I've come up with.  Does this constitute enough of a
> > > separation?  It /should/ cross over into the BFD code in the same way
> > > that the GDB code does: As soon as we hit a bfd object or a
> > > bfd_section object, we call into bfd's new python API to generate the
> > > objects.
> > >
> > > https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/?h=gdb/python-bfd
> > >
> > > For the fully-integrated kdump work, use the python-bfd-kdump branch
> > > (or SUSE folks, python-bfd-kdump-buildid will pick up the separate
> > > debuginfos as we usually expect).
> >
> > Separation isn't the issue, unfortunately.
> > The issue is that we cannot export bfd to python, period.
>
> Excuse my ignorance, but can you explain a bit more why BFD should not
> be used? I'm sure there has been some discussion on that topic; a
> pointer in the right direction would be welcome.



Hi.
I'm not sure this is written down anywhere, but the basic answer is
that bfd is explicitly not a published API.  The developers reserve the
right to rewrite it at will.  [Not that any kind of "rewrite" will ever
happen, but things do get changed.]  Exporting it to python means such
changes are harder, if not impossible, to make.

Which isn't to say that the gnu tools shouldn't be providing published
APIs for such things.  I think they should.  But bfd has a lot of,
ummm, history, and publishing it as a stable API is unlikely to get
buy-in from anyone.  I could be wrong of course.  Maybe someone could
start carving off bits of it to publish, but I would go slow and get
consensus before proceeding too far - I wouldn't want anyone to end up
wasting time on this.

^ permalink raw reply	[flat|nested] 31+ messages in thread
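
As an illustration of the kind of tool-independent ELF access Doug
describes above (explicitly not part of these patches), here is a
minimal sketch using the third-party pyelftools package, assuming a
recent pyelftools version, to read section headers directly - for
example to locate the kernel's .data..percpu section:

from elftools.elf.elffile import ELFFile

def section_info(path, name):
    """Return (address, size) of section NAME in the ELF file at PATH,
    or None if the section is absent."""
    with open(path, "rb") as f:
        elf = ELFFile(f)
        sec = elf.get_section_by_name(name)
        if sec is None:
            return None
        return sec["sh_addr"], sec["sh_size"]

# e.g. section_info("vmlinux", ".data..percpu")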

* Re: Enable gdb to open Linux kernel dumps
  2016-02-01 15:01       ` Jeff Mahoney
  2016-02-02  9:12         ` Kieran Bingham
@ 2016-02-10  3:24         ` Jeff Mahoney
  1 sibling, 0 replies; 31+ messages in thread
From: Jeff Mahoney @ 2016-02-10  3:24 UTC (permalink / raw)
  To: Ales Novak, Kieran Bingham; +Cc: gdb-patches

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 2/1/16 10:01 AM, Jeff Mahoney wrote:
> On 2/1/16 9:32 AM, Ales Novak wrote:
>> On 2016-2-1 12:51, Kieran Bingham wrote:
> 
>>>
>>> On 01/02/16 11:27, Kieran Bingham wrote:
>>>> Hi Ales,
>>>>
>>>> I'm just checking out your tree now to try locally.
>>>>
>>>> It sounds like there is a high level of cross over in our work,
>>>> but I believe our work can complement each other's if we work
>>>> together.
> 
>> Yes. Our primary intention is to open kdumps (i.e., dead images of
>> fallen kernels), but whatever can be shared between live and dead
>> kernel debugging should be shared...
> 
>>>> On 31/01/16 21:44, Ales Novak wrote:
>>>>> The following patches add the basic ability to access Linux
>>>>> kernel dumps using the libkdumpfile library. They create a new
>>>>> target "kdump", so all one has to do is provide the appropriate
>>>>> debuginfo and then run "target kdump
>>>>> /path/to/vmcore".
>>>>>
>>>>> The tasks of the dumped kernel are mapped to threads in gdb.
>>>>>
>>>>> Besides that, there is code adding an understanding of the Linux
>>>>> SLAB memory allocator, which means we can tell, for a given
>>>>> address, to which SLAB it belongs, or list the objects for a
>>>>> given SLAB name - and more.
>>>>>
>>>>> Patches are against "gdb-7.10-release" (but will apply
>>>>> elsewhere).
>>>>>
>>>>> Note: registers of a task are fetched accordingly - either from
>>>>> the dump metadata (for the active tasks) or from their stacks. It
>>>>> should be noted that, as this mechanism varies amongst kernel
>>>>> versions and configurations, my naive implementation currently
>>>>> covers only the dumps I encounter; handling of different kernel
>>>>> versions is to be added.
>>>> In the work that I am doing, I had expected this to be done in
>>>> python for exactly this reason. The kernel version specifics,
>>>> (and architecture specifics) can then live alongside their
>>>> respective trees.
>>>>> In the near future, our plan is to remove the clumsy C-code
>>>>> handling this and reimplement it in Python - only the binding
>>>>> to certain gdb structures (e.g. thread, regcache) has to be
>>>>> added. A colleague of mine is already working on that.
>>>> This sounds exactly like the work I am doing right now. Could
>>>> you pass on my details to your colleague so we can discuss?
>>>
>>> Aha, or is your colleague Andreas Arnez? I'm just about to reply
>>> to his mail over on gdb@ next.
> 
>> No, it's Jeff Mahoney. His current efforts, which include Python
>> binding to threads' regcaches and more, are at:
> 
>> https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/
> 
>> And yes, you're right that I've incorrectly removed authorship from
>> some of his older patches (which in fact are not necessary for the
>> current gdb-kdump to work; they extend the Python binding).
> 
>> And as you've already found, his older patches are at:
> 
>> https://github.com/jeffmahoney/py-crash
> 
> Hi guys -
> 
> Ales gave me the heads up that you were discussing these.  The github
> repo is old and I haven't touched it in a year or so.  The link to my
> git server is the active one, but I should be clear that this is
> currently a WIP from my perspective.  I've been doing my work in the
> rel-7.10.1-kdump branch, which is based on the gdb-7.10.1-release tag,
> plus some SUSE patches to handle build-ids and external debuginfo files.
> 
> This branch is subject to rebasing as I make progress, but there should
> be a stable base underneath it that I can condense and put into a
> separate branch for public consumption.

Hi guys -

I spent a decent amount of time on this in the past week
or so and have something usable to present.  At least in
terms of baseline functionality.  My branches have been
churning quite a bit, much to the annoyance of my colleagues,
I'm sure. :)

Here's the end result:

https://jeffm.io/git/cgit.cgi/gnu/binutils-gdb/log/?h=snapshots/python-working-target-20160209

There is no more kdump.c or py-kdump.c.  All the functionality added to
gdb itself should be sufficiently generic, though I expect there may be
some discussion points.  The target itself is implemented as an
interface between the gdb python API and the libkdump python API,
entirely in Python.  As of this evening, it does depend on this commit:

https://github.com/jeffmahoney/libkdumpfile/commit/9488340227f3d69c893599101d8bdae1106da44b

... on top of the current libkdumpfile master branch.

The interface is now such that my test.py script consists of:
import kdump.target
kdump.target.Target('/var/crash/2015-12-08-15:18/vmcore')

... and I can do:
$ gdb /var/crash/2015-12-08-15\:18/vmlinux-3.16.7-29-desktop
[...]
(gdb) source ../test.py
kdump (<open file '/var/crash/2015-12-08-15:18/vmcore', mode 'r' at 0x7f64c1f32ae0>)
(gdb) thread 409
[Switching to thread 409 (pid 8110)]
#0  0xffffffff8161f172 in context_switch (next=<optimized out>,
    prev=<optimized out>, rq=<optimized out>) at ../kernel/sched/core.c:2334
2334	../kernel/sched/core.c: No such file or directory.
(gdb) bt
#0  0xffffffff8161f172 in context_switch (next=<optimized out>,
    prev=<optimized out>, rq=<optimized out>) at ../kernel/sched/core.c:2334
#1  __schedule () at ../kernel/sched/core.c:2795
#2  0xffffffff8161f62a in schedule () at ../kernel/sched/core.c:2831
#3  0xffffffff8105e9ea in do_wait (wo=0xffff880136a7ff08)
    at ../kernel/exit.c:1506
#4  0xffffffff8105fa87 in SYSC_wait4 (ru=<optimized out>,
    options=<optimized out>, stat_addr=<optimized out>, upid=<optimized out>)
    at ../kernel/exit.c:1615
#5  SyS_wait4 (upid=<optimized out>, stat_addr=140728330276444,
    options=<optimized out>, ru=<optimized out>) at ../kernel/exit.c:1584
#6  <signal handler called>
#7  0x00007fb93274ca5c in ?? ()
#8  0x0000000000000000 in ?? ()
(gdb) python import crash.commands.log
(gdb) pydmesg
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.16.7-29-desktop (geeko@buildhost) (gcc version 4.8.3 20140627 [gcc-4_8-branch revision 212064] (SUSE Linux) ) #1 SMP PREEMPT Fri Oct 23 00:46:04 UTC 2015 (6be6a97)
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.16.7-29-desktop root=UUID=886331b0-64cb-4f49-9db6-aa03562a8df0 eth0=dhcp console=tty0 console=ttyS0,115200 resume=/dev/disk/by-id/ata-GB0500EAFYL_WCASYE758932-part5 splash=silent quiet showopts crashkernel=1024M-:512M
[...]

It's a start.  Now that I have what I think should be a mostly stable
base, I'm going to turn my efforts toward wrangling my existing python
crash projects into something that can work with this a bit better.
Obviously dmesg already works.  ps shouldn't be far behind.

- -Jeff

- -- 
Jeff Mahoney
SUSE Labs
-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.19 (Darwin)
Comment: GPGTools - http://gpgtools.org

iQIcBAEBAgAGBQJWuq1WAAoJEB57S2MheeWyfOsQAKZVllQ378T5untDOMuTLm4h
8tfiuq4+toDcNBwXjWEds8AwnAphSqV4Q/U61Z18adcNDTl2ajRvQHRGhH65DJHA
Nu532HSPl/4LzkwrLUs9KdKYX0ROK05jHWbqTvG8Bf98S/eWrTtRnhbGrxv7O9wK
dBPVoRwEsRlXfWakNUB4x8BiT7dtX3Sdx/buqz6yCVLEsCXs5M4keYWLzn8bzbmS
I/2M4XTfiZQOfImcCqWL7N7uds8EBZCIOmFbEFZ9hVXrbWKsakqvAJRofIyuNq9N
6gjTRjVgxt3Y/fTf96ol0tPJC/J7GIBv5qCfYX3Y58/jEu9Zm9oC2GN+r8mMMvuJ
lEklDn+7hV9wErh61stUOtr2qqIaZE/phH74dVj4S3+8HVZdP/BAvl03sJj71+ju
XDJVTqC+6+TYBEPpiGXJjdQ8LKvZ0aqY1KC+DskIjnLdCLutayFnn8kzVHQ7PAZZ
L/mPHZLGLaHmvllk35txLTYeVywkK4/JWn42EFmM9xggVKUt7rpEsKIXSe51fp5J
7kzZII+D3c/n2+0gQEqaenjeuZqFQMz30Ke7qPcNWXAHPN1hOd96TTQ++C4ePSH4
scgQqx9MKxfB1l8dM3Bi+nVY7dZ/z2gyU93ZPgcdw+n/UufcfpoXmuWESSskPwsx
V5hTGusS1rqTqTyy1p5W
=dp6N
-----END PGP SIGNATURE-----

^ permalink raw reply	[flat|nested] 31+ messages in thread
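
For readers wondering what a command like pydmesg amounts to under the
hood, the following is a simplified, hypothetical sketch of such a
user-defined gdb command - not the actual crash.commands.log code.  It
assumes a 3.x-era kernel whose log buffer is a ring of struct
printk_log records, and that log_buf, log_first_idx and log_next_idx
are visible in the loaded debuginfo.

import gdb

class PyDmesg(gdb.Command):
    """Print the kernel log ring buffer (simplified sketch)."""

    def __init__(self):
        super(PyDmesg, self).__init__("pydmesg", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        rec_type = gdb.lookup_type("struct printk_log")
        char_p = gdb.lookup_type("char").pointer()
        log_buf = gdb.parse_and_eval("log_buf")
        idx = int(gdb.parse_and_eval("log_first_idx"))
        end = int(gdb.parse_and_eval("log_next_idx"))
        while idx != end:
            rec = (log_buf + idx).cast(rec_type.pointer()).dereference()
            if int(rec["len"]) == 0:
                if idx == 0:
                    break       # malformed buffer; bail out of the sketch
                # A zero-length record marks a wrap to the buffer start.
                idx = 0
                continue
            # The message text follows the record header in memory.
            text = rec.address.cast(char_p) + rec_type.sizeof
            msg = text.string(errors="replace", length=int(rec["text_len"]))
            secs = int(rec["ts_nsec"]) // 1000000000
            gdb.write("[%5d] %s\n" % (secs, msg))
            idx += int(rec["len"])

PyDmesg()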

end of thread, other threads:[~2016-02-10  3:24 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-31 21:45 Enable gdb to open Linux kernel dumps Ales Novak
2016-01-31 21:45 ` [PATCH 4/4] Minor cleanups Ales Novak
2016-01-31 21:45 ` [PATCH 2/4] Add Jeff Mahoney's py-crash patches Ales Novak
2016-02-01 12:35   ` Kieran Bingham
2016-02-01 22:23   ` Doug Evans
2016-02-02  2:56     ` Jeff Mahoney
2016-02-02  8:25       ` Kieran Bingham
2016-02-03 17:55       ` Jeff Mahoney
2016-02-03 18:31         ` Doug Evans
2016-02-03 19:29           ` Jeff Mahoney
2016-02-04 17:25           ` Petr Tesarik
2016-02-04 18:32             ` Matt Rice
2016-02-04 22:27             ` Doug Evans
2016-01-31 21:45 ` [PATCH 3/4] Add SLAB allocator understanding Ales Novak
2016-02-01 13:21   ` Kieran Bingham
2016-02-01 22:30     ` Doug Evans
2016-02-02  2:05       ` Ales Novak
2016-02-02  7:22         ` Jan Kiszka
2016-02-02 13:22           ` Petr Tesarik
2016-02-02 14:42             ` Jeff Mahoney
2016-02-02  8:11       ` Kieran Bingham
2016-02-02 10:04     ` Vlastimil Babka
2016-01-31 21:45 ` [PATCH 1/4] Create new target "kdump" which uses libkdumpfile: https://github.com/ptesarik/libkdumpfile to access contents of compressed kernel dump Ales Novak
2016-02-04 12:40   ` Pedro Alves
2016-02-04 12:45     ` Ales Novak
2016-02-01 11:27 ` Enable gdb to open Linux kernel dumps Kieran Bingham
2016-02-01 11:51   ` Kieran Bingham
2016-02-01 14:32     ` Ales Novak
2016-02-01 15:01       ` Jeff Mahoney
2016-02-02  9:12         ` Kieran Bingham
2016-02-10  3:24         ` Jeff Mahoney

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).