From: Pedro Alves <pedro@palves.net>
To: Simon Marchi <simark@simark.ca>, gdb-patches@sourceware.org
Subject: Re: [PATCH 1/3] Fix crash if connection drops in scoped_restore_current_thread's ctor, part 1
Date: Thu, 9 Jul 2020 11:51:02 +0100
Message-ID: <8cde52bc-66d0-14e4-1af7-a770590670ae@palves.net>
In-Reply-To: <261491cd-7887-f99d-a73b-58167f6d4ca6@simark.ca>

On 7/9/20 4:17 AM, Simon Marchi wrote:
> On 2020-07-08 7:31 p.m., Pedro Alves wrote:
>> Running the testsuite against an Asan-enabled build of GDB makes
>> gdb.base/multi-target.exp expose this bug.
>>
>> scoped_restore_current_thread's ctor calls get_frame_id to record the
>> selected frame's ID to restore later.  If the frame ID hasn't been
>> computed yet, it will be computed on the spot, and that will usually
>> require accessing the target's memory and registers, which requires
>> remote accesses.  If the remote connection closes while we're
>> computing the frame ID, the remote target exits its inferiors,
>> unpushes itself, and throws a TARGET_CLOSE_ERROR error.
>>
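
(For illustration, here is a minimal, self-contained C++ toy of the
pattern described above.  The names are made up and this is not GDB's
actual scoped_restore_current_thread; it only shows how an RAII
"save/restore" object, whose constructor lazily computes the state it
wants to save by going through a connection that can drop, ends up
throwing out of its own constructor.)

#include <optional>
#include <stdexcept>
#include <cstdio>

static bool connection_alive = true;

/* Stand-in for computing a frame ID via the target: it needs the
   connection and throws if the connection has dropped (think
   TARGET_CLOSE_ERROR).  */

static int
compute_id_via_target ()
{
  if (!connection_alive)
    throw std::runtime_error ("remote connection closed");
  return 42;
}

struct scoped_restore_selection
{
  scoped_restore_selection ()
  {
    /* The "save" step may force a lazy computation that talks to the
       target, so this constructor can throw.  */
    m_saved_id = compute_id_via_target ();
  }

  ~scoped_restore_selection ()
  {
    /* Restoring the selection would go here; never reached if the
       constructor threw.  */
  }

  std::optional<int> m_saved_id;
};

int
main ()
{
  connection_alive = false;   /* Simulate the connection dropping.  */
  try
    {
      scoped_restore_selection restore;   /* Throws from the ctor.  */
    }
  catch (const std::runtime_error &e)
    {
      std::printf ("caught: %s\n", e.what ());
    }
  return 0;
}
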
>> If that happens, GDB can currently crash, here:
>>
>>> ==18555==ERROR: AddressSanitizer: heap-use-after-free on address 0x621004670aa8 at pc 0x0000007ab125 bp 0x7ffdecaecd20 sp 0x7ffdecaecd10
>>> READ of size 4 at 0x621004670aa8 thread T0
>>>     #0 0x7ab124 in dwarf2_frame_this_id src/binutils-gdb/gdb/dwarf2/frame.c:1228
>>>     #1 0x983ec5 in compute_frame_id src/binutils-gdb/gdb/frame.c:550
>>>     #2 0x9841ee in get_frame_id(frame_info*) src/binutils-gdb/gdb/frame.c:582
>>>     #3 0x1093faa in scoped_restore_current_thread::scoped_restore_current_thread() src/binutils-gdb/gdb/thread.c:1462
>>>     #4 0xaee5ba in fetch_inferior_event(void*) src/binutils-gdb/gdb/infrun.c:3968
>>>     #5 0xaa990b in inferior_event_handler(inferior_event_type, void*) src/binutils-gdb/gdb/inf-loop.c:43
>>>     #6 0xea61b6 in remote_async_serial_handler src/binutils-gdb/gdb/remote.c:14161
>>>     #7 0xefca8a in run_async_handler_and_reschedule src/binutils-gdb/gdb/ser-base.c:137
>>>     #8 0xefcd23 in fd_event src/binutils-gdb/gdb/ser-base.c:188
>>>     #9 0x15a7416 in handle_file_event src/binutils-gdb/gdbsupport/event-loop.cc:548
>>>     #10 0x15a7c36 in gdb_wait_for_event src/binutils-gdb/gdbsupport/event-loop.cc:673
>>>     #11 0x15a5dbb in gdb_do_one_event() src/binutils-gdb/gdbsupport/event-loop.cc:215
>>>     #12 0xbfe62d in start_event_loop src/binutils-gdb/gdb/main.c:356
>>>     #13 0xbfe935 in captured_command_loop src/binutils-gdb/gdb/main.c:416
>>>     #14 0xc01d39 in captured_main src/binutils-gdb/gdb/main.c:1253
>>>     #15 0xc01dc9 in gdb_main(captured_main_args*) src/binutils-gdb/gdb/main.c:1268
>>>     #16 0x414ddd in main src/binutils-gdb/gdb/gdb.c:32
>>>     #17 0x7f590110b82f in __libc_start_main ../csu/libc-start.c:291
>>>     #18 0x414bd8 in _start (build/binutils-gdb/gdb/gdb+0x414bd8)
>>
>> What happens is that, above, we're in dwarf2_frame_this_id, just
>> after the dwarf2_frame_cache call.  The "cache" variable that the
>> dwarf2_frame_cache function returned is already stale.  It was
>> released here, from within the dwarf2_frame_cache call:
>>
>> (top-gdb) bt
>> #0  reinit_frame_cache () at src/gdb/frame.c:1855
>> #1  0x00000000014ff7b0 in switch_to_no_thread () at src/gdb/thread.c:1301
>> #2  0x0000000000f66d3e in switch_to_inferior_no_thread (inf=0x615000338180) at src/gdb/inferior.c:626
>> #3  0x00000000012f3826 in remote_unpush_target (target=0x6170000c5900) at src/gdb/remote.c:5521
>> #4  0x00000000013097e0 in remote_target::readchar (this=0x6170000c5900, timeout=2) at src/gdb/remote.c:9137
>> #5  0x000000000130be4d in remote_target::getpkt_or_notif_sane_1 (this=0x6170000c5900, buf=0x6170000c5918, forever=0, expecting_notif=0, is_notif=0x0) at src/gdb/remote.c:9683
>> #6  0x000000000130c8ab in remote_target::getpkt_sane (this=0x6170000c5900, buf=0x6170000c5918, forever=0) at src/gdb/remote.c:9790
>> #7  0x000000000130bc0d in remote_target::getpkt (this=0x6170000c5900, buf=0x6170000c5918, forever=0) at src/gdb/remote.c:9623
>> #8  0x000000000130838e in remote_target::remote_read_bytes_1 (this=0x6170000c5900, memaddr=0x7fffffffcdc0, myaddr=0x6080000ad3bc "", len_units=64, unit_size=1, xfered_len_units=0x7fff6a29b9a0) at src/gdb/remote.c:8860
>> #9  0x0000000001308bd2 in remote_target::remote_read_bytes (this=0x6170000c5900, memaddr=0x7fffffffcdc0, myaddr=0x6080000ad3bc "", len=64, unit_size=1, xfered_len=0x7fff6a29b9a0) at src/gdb/remote.c:8987
>> #10 0x0000000001311ed1 in remote_target::xfer_partial (this=0x6170000c5900, object=TARGET_OBJECT_MEMORY, annex=0x0, readbuf=0x6080000ad3bc "", writebuf=0x0, offset=140737488342464, len=64, xfered_len=0x7fff6a29b9a0) at src/gdb/remote.c:10988
>> #11 0x00000000014ba969 in raw_memory_xfer_partial (ops=0x6170000c5900, readbuf=0x6080000ad3bc "", writebuf=0x0, memaddr=140737488342464, len=64, xfered_len=0x7fff6a29b9a0) at src/gdb/target.c:918
>> #12 0x00000000014bb720 in target_xfer_partial (ops=0x6170000c5900, object=TARGET_OBJECT_RAW_MEMORY, annex=0x0, readbuf=0x6080000ad3bc "", writebuf=0x0, offset=140737488342464, len=64, xfered_len=0x7fff6a29b9a0) at src/gdb/target.c:1148
>> #13 0x00000000014bc3b5 in target_read_partial (ops=0x6170000c5900, object=TARGET_OBJECT_RAW_MEMORY, annex=0x0, buf=0x6080000ad3bc "", offset=140737488342464, len=64, xfered_len=0x7fff6a29b9a0) at src/gdb/target.c:1380
>> #14 0x00000000014bc593 in target_read (ops=0x6170000c5900, object=TARGET_OBJECT_RAW_MEMORY, annex=0x0, buf=0x6080000ad3bc "", offset=140737488342464, len=64) at src/gdb/target.c:1419
>> #15 0x00000000014bbd4d in target_read_raw_memory (memaddr=0x7fffffffcdc0, myaddr=0x6080000ad3bc "", len=64) at src/gdb/target.c:1252
>> #16 0x0000000000bf27df in dcache_read_line (dcache=0x6060001eddc0, db=0x6080000ad3a0) at src/gdb/dcache.c:336
>> #17 0x0000000000bf2b72 in dcache_peek_byte (dcache=0x6060001eddc0, addr=0x7fffffffcdd8, ptr=0x6020001231b0 "") at src/gdb/dcache.c:403
>> #18 0x0000000000bf3103 in dcache_read_memory_partial (ops=0x6170000c5900, dcache=0x6060001eddc0, memaddr=0x7fffffffcdd8, myaddr=0x6020001231b0 "", len=8, xfered_len=0x7fff6a29bf20) at src/gdb/dcache.c:484
>> #19 0x00000000014bafe9 in memory_xfer_partial_1 (ops=0x6170000c5900, object=TARGET_OBJECT_STACK_MEMORY, readbuf=0x6020001231b0 "", writebuf=0x0, memaddr=140737488342488, len=8, xfered_len=0x7fff6a29bf20) at src/gdb/target.c:1034
>> #20 0x00000000014bb212 in memory_xfer_partial (ops=0x6170000c5900, object=TARGET_OBJECT_STACK_MEMORY, readbuf=0x6020001231b0 "", writebuf=0x0, memaddr=140737488342488, len=8, xfered_len=0x7fff6a29bf20) at src/gdb/target.c:1076
>> #21 0x00000000014bb6b3 in target_xfer_partial (ops=0x6170000c5900, object=TARGET_OBJECT_STACK_MEMORY, annex=0x0, readbuf=0x6020001231b0 "", writebuf=0x0, offset=140737488342488, len=8, xfered_len=0x7fff6a29bf20) at src/gdb/target.c:1133
>> #22 0x000000000164564d in read_value_memory (val=0x60f000029440, bit_offset=0, stack=1, memaddr=0x7fffffffcdd8, buffer=0x6020001231b0 "", length=8) at src/gdb/valops.c:956
>> #23 0x0000000001680fff in value_fetch_lazy_memory (val=0x60f000029440) at src/gdb/value.c:3764
>> #24 0x0000000001681efd in value_fetch_lazy (val=0x60f000029440) at src/gdb/value.c:3910
>> #25 0x0000000001676143 in value_optimized_out (value=0x60f000029440) at src/gdb/value.c:1411
>> #26 0x0000000000e0fcb8 in frame_register_unwind (next_frame=0x6210066bfde0, regnum=16, optimizedp=0x7fff6a29c200, unavailablep=0x7fff6a29c240, lvalp=0x7fff6a29c2c0, addrp=0x7fff6a29c300, realnump=0x7fff6a29c280, bufferp=0x7fff6a29c3a0 "@\304)j\377\177") at src/gdb/frame.c:1144
>> #27 0x0000000000e10418 in frame_unwind_register (next_frame=0x6210066bfde0, regnum=16, buf=0x7fff6a29c3a0 "@\304)j\377\177") at src/gdb/frame.c:1196
>> #28 0x0000000000f00431 in i386_unwind_pc (gdbarch=0x6210043d0110, next_frame=0x6210066bfde0) at src/gdb/i386-tdep.c:1969
>> #29 0x0000000000e39724 in gdbarch_unwind_pc (gdbarch=0x6210043d0110, next_frame=0x6210066bfde0) at src/gdb/gdbarch.c:3056
>> #30 0x0000000000c2ea90 in dwarf2_tailcall_sniffer_first (this_frame=0x6210066bfde0, tailcall_cachep=0x6210066bfee0, entry_cfa_sp_offsetp=0x0) at src/gdb/dwarf2/frame-tailcall.c:423
>> #31 0x0000000000c36bdb in dwarf2_frame_cache (this_frame=0x6210066bfde0, this_cache=0x6210066bfdf8) at src/gdb/dwarf2/frame.c:1198
>> #32 0x0000000000c36eb3 in dwarf2_frame_this_id (this_frame=0x6210066bfde0, this_cache=0x6210066bfdf8, this_id=0x6210066bfe40) at src/gdb/dwarf2/frame.c:1226
>>
>> Note that remote_target::readchar in frame #4 throws
>> TARGET_CLOSE_ERROR after the remote_unpush_target call in frame #3
>> returns.
>>
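
(Again purely for illustration, here is a self-contained C++ toy of the
use-after-free pattern shown by the two backtraces above.  The names
are made up and this is not GDB code: a lookup function hands back a
pointer into a cache, extra work done while filling the cache hits an
error that resets the whole cache, the error gets swallowed on the way
out, and the caller goes on using a dangling pointer.)

#include <cstdio>
#include <map>
#include <memory>
#include <stdexcept>

struct cache_entry { int id; };

/* Toy stand-in for the frame cache.  */
static std::map<int, std::unique_ptr<cache_entry>> cache;

static void
simulate_connection_drop ()
{
  cache.clear ();   /* Analogous to reinit_frame_cache.  */
  throw std::runtime_error ("connection closed");   /* Think TARGET_CLOSE_ERROR.  */
}

static cache_entry *
lookup (int key)
{
  auto &slot = cache[key];
  if (slot == nullptr)
    slot.reset (new cache_entry {key});
  cache_entry *result = slot.get ();

  try
    {
      /* Extra work done while filling the cache (compare the
         tail-call sniffer reading target memory above).  The error
         raised in here resets the cache...  */
      simulate_connection_drop ();
    }
  catch (const std::runtime_error &)
    {
      /* ... but is swallowed, so the caller never learns that its
         pointer went stale.  */
    }

  return result;   /* Dangling: the entry was freed by the reset.  */
}

int
main ()
{
  cache_entry *e = lookup (1);
  std::printf ("%d\n", e->id);   /* Use-after-free; this is what ASan reports.  */
  return 0;
}
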
>> The problem is that that TARGET_CLOSE_ERROR is swallowed by
> 
> `that that`
> 

That wasn't a typo; compare with:

 The problem is that this TARGET_CLOSE_ERROR
 The problem is that those TARGET_CLOSE_ERRORs
 The problem is that these TARGET_CLOSE_ERRORs

https://english.stackexchange.com/questions/3418/how-do-you-handle-that-that-the-double-that-problem

I'll say "that the" instead to avoid confusion.

> Otherwise, LGTM.
> 
> Simon

Thread overview: 31+ messages
2020-07-08 23:31 [PATCH 0/3] Fix crash if connection drops in scoped_restore_current_thread's ctor Pedro Alves
2020-07-08 23:31 ` [PATCH 1/3] Fix crash if connection drops in scoped_restore_current_thread's ctor, part 1 Pedro Alves
2020-07-09  3:17   ` Simon Marchi
2020-07-09 10:51     ` Pedro Alves [this message]
2020-07-09 14:13       ` Simon Marchi
2020-07-08 23:31 ` [PATCH 2/3] Fix crash if connection drops in scoped_restore_current_thread's ctor, part 2 Pedro Alves
2020-07-09  3:31   ` Simon Marchi
2020-07-09 11:12     ` Pedro Alves
2020-07-09 14:16       ` Simon Marchi
2020-07-09 17:23         ` Pedro Alves
2020-07-09 17:28           ` Simon Marchi
2020-07-08 23:31 ` [PATCH 3/3] Make scoped_restore_current_thread's cdtors exception free (RFC) Pedro Alves
2020-07-09  3:49   ` Simon Marchi
2020-07-09 11:56     ` Pedro Alves
2020-07-09 12:09       ` Pedro Alves
2020-07-09 15:40       ` Simon Marchi
2020-07-09 22:22         ` Pedro Alves
2020-07-10  2:55           ` Simon Marchi
2020-10-30  1:13             ` Pedro Alves
2020-10-30  1:37               ` [pushed] Move lookup_selected_frame to frame.c Pedro Alves
2020-10-30  7:44               ` [PATCH 3/3] Make scoped_restore_current_thread's cdtors exception free (RFC) Aktemur, Tankut Baris
2020-10-30 11:32                 ` Pedro Alves
2020-10-31 14:35                   ` [PATCH] Fix frame cycle detection (Re: [PATCH 3/3] Make scoped_restore_current_thread's cdtors exception free (RFC)) Pedro Alves
2020-11-09 14:05                     ` Aktemur, Tankut Baris
2020-11-16 13:48                       ` Tom de Vries
2020-11-16 14:57                         ` Pedro Alves
2020-07-10 23:02 ` [PATCH 0/3] Fix crash if connection drops in scoped_restore_current_thread's ctor Pedro Alves
2020-07-22 19:37   ` Simon Marchi
2020-07-22 20:37     ` Pedro Alves
2020-07-22 20:47       ` Simon Marchi
2020-07-23 15:28         ` [pushed] Don't touch frame_info objects if frame cache was reinitialized (was: Re: [PATCH 0/3] Fix crash if connection drops in scoped_restore_current_thread's ctor) Pedro Alves
