From: "andrew.burgess at embecosm dot com" <sourceware-bugzilla@sourceware.org>
To: gdb-prs@sourceware.org
Subject: [Bug gdb/26819] RISC-V: internal-error: int finish_step_over(execution_control_state*): Assertion
Date: Thu, 03 Jun 2021 20:41:30 +0000
Message-ID: <bug-26819-4717-obgvMkXFw9@http.sourceware.org/bugzilla/>
In-Reply-To: <bug-26819-4717@http.sourceware.org/bugzilla/>

https://sourceware.org/bugzilla/show_bug.cgi?id=26819

--- Comment #38 from Andrew Burgess <andrew.burgess at embecosm dot com> ---
I took a look at this issue and I can reproduce the failure.

First, an interesting aside: I originally tried to reproduce this issue on a
GDB built with --enable-targets=all, and couldn't reproduce the failure.  But,
when I built a GDB with --target=riscv-elf I was able to reproduce the issue
just fine.

The problem turns out to be that the all-targets GDB defaults to osabi
GNU/Linux, while the riscv-elf GDB does not include Linux support and so
defaults to osabi 'none'.  If I use the all-targets GDB and explicitly do
'set osabi none' then I can reproduce the failure with the all-targets GDB
as well.
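
For reference, the reproduction with the all-targets build is just the
following (the ':3333' here is openocd's default gdb port and is only an
example, not taken from my actual session):

  (gdb) set osabi none
  (gdb) target extended-remote :3333
  ... then run the failing test as before ...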

The difference is that GNU/Linux RISC-V doesn't support hardware single
stepping, so when using that osabi GDB makes use of software single-step.
The bare-metal 'none' osabi assumes that hardware single-step support is
available.
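
Purely as an illustration (these packets are not taken from the failing
session), the two modes look quite different on the wire: with software
single-step GDB plants a breakpoint at the next pc(s) itself (via a Z0
packet or a plain memory write) and then just continues, while with
hardware single-step GDB asks the target to do the step:

  Software single-step (osabi GNU/Linux):
    Sending packet: $Z0,<next-pc>,<kind>#xx
    Sending packet: $vCont;c#xx

  Hardware single-step (osabi none):
    Sending packet: $vCont;s:2#xx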

So, why is this test failing when using hardware single stepping?

It is my belief that the problem here is a bug in the multi-core support of
openocd.

My first clue is the following sequence of vCont and stop replies sent between
GDB and openocd:

  Sending packet: $vCont?#49
  Packet received: vCont;c;C;s;S
  Sending packet: $vCont;c#a8
  Packet received: T05thread:2;
  Sending packet: $vCont;s:2#24
  Packet received: T05
  Sending packet: $vCont;s:2;c#c2
  Packet received: T05
  Sending packet: $vCont;s:1;c#c1
  Packet received: T05thread:2;

Notice that after the first 'vCont;c' we get back 'T05thread:2;', clearly
indicating which thread stopped.

Next GDB sends 'vCont;s:2', so steps only the single thread '2'; now we get
back 'T05'.  This is annoying (no thread-id), but the original patches for
this issue addressed this case: as only one thread was set running, GDB
correctly "guesses" the thread and carries on.

After that GDB sends 'vCont;s:2;c', now we're single stepping thread 2, but
allowing thread 1 to continue.  Again the reply comes back 'T05'.  This time
GDB guesses thread 1 as the stopped thread, and things start to go off the
rails.

Notice, however, that in the next packet GDB sends 'vCont;s:1;c', which is
kind of the reverse (step thread 1, continue thread 2), but now we get a
reply 'T05thread:2' which includes a thread-id.  Weird!

The summary of the above then is that sometimes openocd does not return a
thread-id even when multiple threads are running.

I looked a little into openocd, specifically at the function
gdb_signal_reply in server/gdb_server.c.  This function is passed a 'struct
target *target', and whether a thread-id is sent back or not depends on
whether the 'target->rtos' field is set.

Now, if I debug this function I see two different target pointers passed in
at different times: one target represents "riscv.cpu0" and has its rtos
field set; the other represents "riscv.cpu1" and does not have its rtos
field set.

Now, I don't know the openocd internals, but what seems to happen is that
sometimes the target stops and gdb_signal_reply is called with the
"riscv.cpu0" target; because target->rtos is set the stop is reported
against target->rtos->current_thread, which can be thread 2.

Then, sometimes, the target stops with "riscv.cpu1"; in this case openocd
makes no attempt to add a thread-id as target->rtos is NULL.
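
To make the suspected behaviour concrete, here's a much simplified sketch
of what I believe is going on.  To be clear, this is not the real openocd
code; the mock types and the function body below are only my illustration
of the logic described above:

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  /* Minimal mock types, just enough to show the shape of the logic; these
     are NOT openocd's real definitions.  */
  struct rtos   { uint64_t current_thread; };
  struct target { struct rtos *rtos; };

  /* Simplified sketch of the decision in gdb_signal_reply; the real
     function in server/gdb_server.c does much more than this.  */
  static int
  sketch_signal_reply (struct target *target, char *buf, int signal)
  {
    int len = sprintf (buf, "T%2.2x", signal);

    /* A thread-id is only appended when an rtos is attached to this
       particular target...  */
    if (target->rtos != NULL)
      len += sprintf (buf + len, "thread:%" PRIx64 ";",
                      target->rtos->current_thread);

    /* ...so a stop reported via a target whose rtos field is NULL (here,
       "riscv.cpu1") goes out as a bare "T05", leaving GDB to guess which
       thread actually stopped.  */
    return len;
  }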

So, where does this leave us?

The above explains why GDB starts getting confused, but doesn't fully explain
why we eventually hit the assert.  I'm still looking into the details of that,
but wanted to record what I knew so far.

GDB _could_ possibly be slightly smarter when it guesses: if we send
'vCont;s:2;c', then maybe we should guess that thread 2 is the likely
thread to have stopped, as most of the time thread 2's single step will
complete before thread 1 hits anything interesting.... most of the time.
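
If we did want GDB to guess better then the heuristic would look something
like the sketch below.  Everything here is hypothetical (the names and
types are invented for this example); it's an illustration of the idea, not
a patch against GDB:

  /* Given how each thread was resumed by the last vCont, pick a best
     guess for a bare "T05" stop reply.  Purely illustrative.  */

  enum resume_kind { RESUME_CONTINUE, RESUME_STEP };

  struct resumed_thread
  {
    int thread_id;
    enum resume_kind kind;
  };

  static int
  guess_stopped_thread (const struct resumed_thread *threads, int count)
  {
    int stepped = -1, nstepped = 0;

    /* Only one thread was resumed at all; this is the case the existing
       patches already handle.  */
    if (count == 1)
      return threads[0].thread_id;

    for (int i = 0; i < count; i++)
      if (threads[i].kind == RESUME_STEP)
        {
          stepped = threads[i].thread_id;
          nstepped++;
        }

    /* For 'vCont;s:2;c' the single stepped thread (2) will usually stop
       first, so prefer it as the guess.  */
    if (nstepped == 1)
      return stepped;

    return -1;  /* Still ambiguous, no good guess.  */
  }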

But this really feels like working around a broken target.


Thread overview: 50+ messages
2020-10-30 16:11 [Bug gdb/26819] New: " jmatyas at codasip dot com
2020-10-30 16:13 ` [Bug gdb/26819] " jmatyas at codasip dot com
2020-11-08  9:07 ` [Bug gdb/26819] RISC-V: " jmatyas at codasip dot com
2020-11-08 10:07 ` andrew.burgess at embecosm dot com
2020-11-08 11:10 ` jmatyas at codasip dot com
2020-11-08 11:13 ` jmatyas at codasip dot com
2020-11-09  9:51 ` andrew.burgess at embecosm dot com
2021-01-06  6:37 ` jmatyas at codasip dot com
2021-01-06  9:35 ` andrew.burgess at embecosm dot com
2021-01-08 11:03 ` jmatyas at codasip dot com
2021-01-08 16:09 ` simark at simark dot ca
2021-01-14  1:27 ` cvs-commit at gcc dot gnu.org
2021-01-14  1:29 ` simark at simark dot ca
2021-01-14  9:11 ` andrew.burgess at embecosm dot com
2021-01-14  9:41 ` jmatyas at codasip dot com
2021-01-16 11:00 ` jmatyas at codasip dot com
2021-01-16 11:03 ` jmatyas at codasip dot com
2021-01-16 11:04 ` jmatyas at codasip dot com
2021-01-17  3:36 ` simark at simark dot ca
2021-01-17 22:37 ` jmatyas at codasip dot com
2021-01-17 22:38 ` jmatyas at codasip dot com
2021-01-17 22:39 ` jmatyas at codasip dot com
2021-01-18  5:15 ` simark at simark dot ca
2021-01-18 11:01 ` jmatyas at codasip dot com
2021-01-18 15:23 ` sebastian.huber@embedded-brains.de
2021-01-27 12:35 ` jmatyas at codasip dot com
2021-01-27 16:00 ` simark at simark dot ca
2021-02-07 21:20 ` jmatyas at codasip dot com
2021-02-08  2:00 ` simon.marchi at polymtl dot ca
2021-02-08  2:02 ` simon.marchi at polymtl dot ca
2021-02-20 20:51 ` simark at simark dot ca
2021-02-24  9:17 ` jmatyas at codasip dot com
2021-02-25  7:41 ` jmatyas at codasip dot com
2021-02-25 20:50 ` simark at simark dot ca
2021-03-04 23:23 ` jmatyas at codasip dot com
2021-03-04 23:24 ` jmatyas at codasip dot com
2021-03-04 23:25 ` jmatyas at codasip dot com
2021-03-04 23:25 ` jmatyas at codasip dot com
2021-03-04 23:27 ` simark at simark dot ca
2021-03-04 23:46 ` jmatyas at codasip dot com
2021-03-04 23:49 ` jmatyas at codasip dot com
2021-06-03 20:41 ` andrew.burgess at embecosm dot com [this message]
2021-06-03 21:19 ` andrew.burgess at embecosm dot com
2021-06-03 21:38 ` andrew.burgess at embecosm dot com
2021-06-06 14:28 ` brobecker at gnat dot com
2021-06-07  5:28 ` jmatyas at codasip dot com
2021-06-07  8:30 ` andrew.burgess at embecosm dot com
2021-08-06  1:41 ` lennordocdoc0921 at gmail dot com
2022-04-09 15:07 ` [Bug tdep/26819] " tromey at sourceware dot org
2022-11-29 19:38 ` tromey at sourceware dot org
