public inbox for gdb@sourceware.org
From: Mark Kettenis <mark.kettenis@xs4all.nl>
To: jan.kratochvil@redhat.com
Cc: Jon.Zhou@jdsu.com, gdb@sourceware.org
Subject: Re: support biarch gcore?
Date: Mon, 05 Jul 2010 11:55:00 -0000	[thread overview]
Message-ID: <201007051154.o65BskRG028816@glazunov.sibelius.xs4all.nl> (raw)
In-Reply-To: <20100705071235.GA26137@host0.dyn.jankratochvil.net> (message	from Jan Kratochvil on Mon, 5 Jul 2010 09:12:35 +0200)

> Date: Mon, 5 Jul 2010 09:12:35 +0200
> From: Jan Kratochvil <jan.kratochvil@redhat.com>
> 
> On Mon, 05 Jul 2010 06:52:37 +0200, Jon Zhou wrote:
> > Regarding this case, does the current release solve it ? I just tried the
> > patch but looks it doesn't work
> 
> Issue is tracked at:
> 	amd64 gdb generates corrupted 32bit core file
> 	http://sourceware.org/bugzilla/show_bug.cgi?id=11467
> 
> There was a patchset by H.J.Lu but it did not make it in FSF GDB:
> 	PATCH: PR corefiles/11467: amd64 gdb generates corrupted 32bit core file
> 	http://sourceware.org/ml/gdb-patches/2010-04/msg00315.html
> with the last mail of this thread:
> 	http://sourceware.org/ml/gdb-patches/2010-04/msg00427.html
> 	> Please stop sending diffs until you understand how the code is
> 	> supposed to work.
> 
> The Fedora patch
> 	http://cvs.fedoraproject.org/viewvc/rpms/gdb/devel/gdb-6.5-gcore-i386-on-amd64.patch?content-type=text%2Fplain&view=co
> 
> will be rebased or replaced by H.J.Lu's one for gdb-7.2-pre by 2010-07-27.

The proper way to fix the issue is to add cross-core support to BFD
for i386 and amd64, as was done for powerpc/powerpc64 a couple of
years ago.  See the thread starting at:

http://sourceware.org/ml/binutils/2010-04/msg00225.html

Perhaps I need to resubmit that diff now that things have settled down
a bit.

With that diff in, there still is a GDB issue that needs to be
resolved.  The core generation code in
linux-nat.c:linux_nat_do_thread_registers() allocates a buffer of type
gdb_gregset_t to store the registers.  Since gdb_gregset_t differs in
size between i386 and amd64, the size passed to
regset_from_core_section() is always that of the amd64 gdb_gregset_t.
Because the current code checks for an exact match, it fails to return
a regset, and things go downhill from there.

Fixing the code in linux-nat.c is a bit nasty:

* The definition of the 32-bit version of gdb_gregset_t isn't readily
  available on 64-bit systems.

* The code is used on all Linux platforms and only a few of them are
  bi-arch.

An alternative solution would be to make i386_regset_from_core_section()
a little bit more forgiving.  The diff below works since the amd64
gdb_gregset_t is larger than the i386 version.


2010-07-05  Mark Kettenis  <kettenis@gnu.org>

        * i386-tdep.c (i386_supply_gregset, i386_collect_gregset)
        (i386_regset_from_core_section): Relax check for size of .reg
        section.

Index: i386-tdep.c
===================================================================
RCS file: /cvs/src/src/gdb/i386-tdep.c,v
retrieving revision 1.316
diff -u -p -r1.316 i386-tdep.c
--- i386-tdep.c	22 Jun 2010 02:15:45 -0000	1.316
+++ i386-tdep.c	5 Jul 2010 11:50:55 -0000
@@ -2775,7 +2775,7 @@ i386_supply_gregset (const struct regset
   const gdb_byte *regs = gregs;
   int i;
 
-  gdb_assert (len == tdep->sizeof_gregset);
+  gdb_assert (len >= tdep->sizeof_gregset);
 
   for (i = 0; i < tdep->gregset_num_regs; i++)
     {
@@ -2799,7 +2799,7 @@ i386_collect_gregset (const struct regse
   gdb_byte *regs = gregs;
   int i;
 
-  gdb_assert (len == tdep->sizeof_gregset);
+  gdb_assert (len >= tdep->sizeof_gregset);
 
   for (i = 0; i < tdep->gregset_num_regs; i++)
     {
@@ -2880,7 +2880,7 @@ i386_regset_from_core_section (struct gd
 {
   struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
 
-  if (strcmp (sect_name, ".reg") == 0 && sect_size == tdep->sizeof_gregset)
+  if (strcmp (sect_name, ".reg") == 0 && sect_size >= tdep->sizeof_gregset)
     {
       if (tdep->gregset == NULL)
 	tdep->gregset = regset_alloc (gdbarch, i386_supply_gregset,
