From: Pedro Alves <pedro@codesourcery.com>
To: Hui Zhu <teawater@gmail.com>
Cc: gdb-patches@sourceware.org
Subject: Re: [RFA]corelow.c: Add tid to add_to_thread_list
Date: Fri, 06 Aug 2010 17:18:00 -0000 [thread overview]
Message-ID: <201008061817.49329.pedro@codesourcery.com> (raw)
In-Reply-To: <AANLkTimAdhn8iR3LfNtKEezn1hErOZDXhU==8XUi-cmF@mail.gmail.com>
On Friday 06 August 2010 17:47:53, Hui Zhu wrote:
> The root cause of this issue is that the idle thread's pid is 0.
I'm still interested in answers to the questions I wrote before.
Reading the thread again, I understand this is a kernel core dump.
Am I correct? I've never loaded one in gdb, hence my questions.
From your earlier objdump output,
Sections:
Idx Name Size VMA LMA File off Algn
0 note0 00000a48 0000000000000000 0000000000000000 00000238 2**0
CONTENTS, READONLY
1 .reg/0 000000d8 0000000000000000 0000000000000000 000002bc 2**2
CONTENTS
2 .reg 000000d8 0000000000000000 0000000000000000 000002bc 2**2
CONTENTS
3 .reg/2719 000000d8 0000000000000000 0000000000000000 00000420 2**2
CONTENTS
4 .reg/0 000000d8 0000000000000000 0000000000000000 00000584 2**2
CONTENTS
5 .reg/0 000000d8 0000000000000000 0000000000000000 000006e8 2**2
there's always one thread per core, never more. Is that correct? Is there
any indication in the core notes that would allow us to identify this core
as a kernel core, rather than an application core? IMO, since we're debugging
at the kernel level, we'd instead use that info to teach bfd to build the
.reg sections as, say:
0 note0 00000a48 0000000000000000 0000000000000000 00000238 2**0
CONTENTS, READONLY
1 .reg/1 000000d8 0000000000000000 0000000000000000 000002bc 2**2
CONTENTS
2 .reg 000000d8 0000000000000000 0000000000000000 000002bc 2**2
CONTENTS
3 .reg/2 000000d8 0000000000000000 0000000000000000 00000420 2**2
CONTENTS
4 .reg/3 000000d8 0000000000000000 0000000000000000 00000584 2**2
CONTENTS
5 .reg/4 000000d8 0000000000000000 0000000000000000 000006e8 2**2
that is, identify the cores, not the process each core happened to be
running.
> If more than one cpu is idle, and each cpu becomes a thread in the
> core file, we get a core file in which several threads have the same
> ptid. For now, gdb cannot handle it:
> struct thread_info *
> add_thread_silent (ptid_t ptid)
> {
If this function hits an internal error in this scenario, then
it has a bug. I think Maciej wrote a patch to fix it in our
internal tree; I'll try to look for it. Note that even with that
fixed, gdb would still discard all idle threads but one,
and, when accessing the registers of the one that stays, we'd
be accessing the wrong .reg section.
--
Pedro Alves
Thread overview: 10+ messages
2010-08-03 8:49 Hui Zhu
2010-08-05 18:44 ` Tom Tromey
2010-08-06 2:56 ` Hui Zhu
2010-08-06 9:57 ` Pedro Alves
2010-08-06 16:48 ` Hui Zhu
2010-08-06 17:18 ` Pedro Alves [this message]
2010-08-06 20:06 ` Pedro Alves
2010-08-06 20:50 ` Maciej W. Rozycki
2010-08-09 2:28 ` Hui Zhu
2010-08-09 14:48 ` Pedro Alves