public inbox for gdb-patches@sourceware.org
From: Rainer Orth <ro@CeBiTec.Uni-Bielefeld.DE>
To: Pedro Alves <palves@redhat.com>
Cc: gdb-patches@sourceware.org
Subject: Re: Unbreaking gdb on Solaris post-multitarget [PR 25939]
Date: Fri, 19 Jun 2020 14:36:07 +0200	[thread overview]
Message-ID: <ydd366rb5c8.fsf@CeBiTec.Uni-Bielefeld.DE> (raw)
In-Reply-To: <f86dba2f-21db-93aa-60c2-79b6230d085d@redhat.com> (Pedro Alves's message of "Thu, 18 Jun 2020 16:51:49 +0100")

Hi Pedro,

> On 6/18/20 3:55 PM, Pedro Alves via Gdb-patches wrote:
>> On 6/17/20 3:45 PM, Rainer Orth wrote:
>>> [Thread debugging using libthread_db enabled]
>>> [New Thread 1 (LWP 1)]
>>> Breakpoint 1 at 0x401036: file hello.c, line 6.
>>> bottom-gdb.gdb:3: Error in sourced command file:
>>> procfs: couldn't find pid 0 in procinfo list.
>> 
>> I see what this is.  This is procfs_target::wait relying on
>> inferior_ptid.  Since the multi-target series, inferior_ptid
>> is null_ptid before we call target_wait:
>> 
>> static ptid_t
>> do_target_wait_1 (inferior *inf, ptid_t ptid,
>> 		  target_waitstatus *status, int options)
>> {
>>   ptid_t event_ptid;
>>   struct thread_info *tp;
>> 
>>   /* We know that we are looking for an event in the target of inferior
>>      INF, but we don't know which thread the event might come from.  As
>>      such we want to make sure that INFERIOR_PTID is reset so that none of
>>      the wait code relies on it - doing so is always a mistake.  */
>>   switch_to_inferior_no_thread (inf);
>> 
>> 
>> I'm working on a patch.

I'd identified procfs_target::wait as the cause of the
find_procinfo_or_die call with pid = 0 by adding a sleep call in the
latter and attaching gdb to see where it was, but hadn't gotten any
further yet.

> Here it is.  This works for me on a Solaris 11.3 (virtual and slow...) machine.

I'd meant to suggest trying gcc211 in the GCC compile farm, but the
machine was quite busy when I looked, and it lacked some vital software
(bison, patch, makeinfo; make only as gmake; only gcc 5.5.0, which may
or may not work).  I've built those myself and managed a gdb build,
too, but the experience wasn't pretty: not what I'd want to work with
as a developer on a foreign platform meant to support
gcc/gdb/binutils...

> Debugging GDB itself works for me, and I've checked that the gdb.base/break.exp
> testcase passes cleanly, at least.
>
> Your push_target fix is still necessary, FAOD.

Should I push it as is (with an appropriate description, of course) or
would the code change need a comment, too?

> Could you give it a try?

I did so now, both on amd64-pc-solaris2.11 (Solaris 11.4), and
sparcv9-sun-solaris2.11 (Solaris 11.3, gcc211 above).  

gdb basically works again, but compared to the pre-multi-target results
I still have a considerable number of regressions:

before:

# of expected passes            62928
# of unexpected failures        1841
# of unexpected successes       4
# of expected failures          49
# of unknown successes          6

now:

# of expected passes            63768
# of unexpected failures        2411
# of expected failures          52
# of unknown successes          1

Of course there's months of gdb development between the two, but e.g. I
see

    268 /vol/src/gnu/gdb/hg/master/local/gdb/thread.c:336: internal-error: thread_info::thread_info(inferior*, ptid_t): Assertion `inf_ != NULL' failed.
    119 /vol/src/gnu/gdb/hg/master/local/gdb/thread.c:86: internal-error: thread_info* inferior_thread(): Assertion `tp' failed.
     88 /vol/src/gnu/gdb/hg/master/local/gdb/inline-frame.c:384: internal-error: void skip_inline_frames(thread_info*, bpstat): Assertion `find_inline_frame_state (thread) == NULL' failed.

compared to previous

    487 /vol/src/gnu/gdb/hg/master/reghunt/gdb/thread.c:93: internal-error: thread_info* inferior_thread(): Assertion `tp' failed.
     72 /vol/src/gnu/gdb/hg/master/reghunt/gdb/inline-frame.c:367: internal-error: void skip_inline_frames(thread_info*, bpstat): Assertion `find_inline_frame_state (thread) == NULL' failed.

Some of those are definitely regressions, although it's difficult to
say given the flaky nature of several tests on Solaris.

Whatever the case, it looks like I have months of work ahead ;-)

Thanks a lot for fixing this.

	Rainer

-- 
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University


Thread overview: 11+ messages
2020-06-16 14:21 Rainer Orth
2020-06-16 19:16 ` Pedro Alves
2020-06-17 14:45   ` Rainer Orth
2020-06-18 14:55     ` Pedro Alves
2020-06-18 15:51       ` Pedro Alves
2020-06-19 12:36         ` Rainer Orth [this message]
2020-06-19 13:55           ` Pedro Alves
2020-06-21 16:37             ` [COMMITTED PATCH][PR gdb/25939] Move push_target call earlier in procfs.c Rainer Orth
2020-06-22 10:19               ` Pedro Alves
2020-06-17 15:43   ` Unbreaking gdb on Solaris post-multitarget [PR 25939] Tom Tromey
2020-06-17 17:07     ` Rainer Orth
