public inbox for gdb-prs@sourceware.org
* [Bug python/22748] crash from custom unwinder
       [not found] <bug-22748-4717@http.sourceware.org/bugzilla/>
@ 2020-07-06 17:40 ` cvs-commit at gcc dot gnu.org
  2021-06-07 14:45 ` tromey at sourceware dot org
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 5+ messages in thread
From: cvs-commit at gcc dot gnu.org @ 2020-07-06 17:40 UTC (permalink / raw)
  To: gdb-prs

https://sourceware.org/bugzilla/show_bug.cgi?id=22748

--- Comment #10 from cvs-commit at gcc dot gnu.org <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Andrew Burgess <aburgess@sourceware.org>:

https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=9fc501fdfe5dc82b5e5388cde4ac2ab70ed69d75

commit 9fc501fdfe5dc82b5e5388cde4ac2ab70ed69d75
Author: Andrew Burgess <andrew.burgess@embecosm.com>
Date:   Mon Jun 8 11:36:13 2020 +0100

    gdb: Python unwinders, inline frames, and tail-call frames

    This started with me running into the bug described in python/22748.
    In summary, if the frame sniffing code accessed any registers within
    an inline frame then GDB would crash with this error:

      gdb/frame.c:579: internal-error: frame_id get_frame_id(frame_info*):
      Assertion `fi->level == 0' failed.

    The problem is that, when in the Python unwinder I write this:

      pending_frame.read_register ("register-name")

    This is translated internally into a call to `value_of_register',
    which in turn becomes a call to `value_of_register_lazy'.

    Usually this isn't a problem: `value_of_register_lazy' requires the
    next (more inner) frame to have a valid frame_id, which will be the
    case (if we're sniffing frame #1, then frame #0 will already have had
    its frame-id figured out).

    Unfortunately, if frame #0 is inline within frame #1, then the
    frame-id for frame #0 can't be computed until we have the frame-id
    for #1.  As a result we can't create a lazy register value for frame
    #1 when frame #0 is inline.
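
    For illustration, a Python unwinder of roughly the following shape is
    enough to exercise the path described above; the class name and the
    unwinder name are made-up examples, not taken from the original bug
    report:

      import gdb
      from gdb.unwinder import Unwinder

      class SnifferDemo(Unwinder):
          def __init__(self):
              super().__init__("sniffer-demo")

          def __call__(self, pending_frame):
              # Any register read during sniffing goes through
              # value_of_register_lazy; before this fix, doing so while
              # the next frame was inline hit the assertion shown above.
              pending_frame.read_register("pc")
              # Decline to handle the frame so the normal unwinders run.
              return None

      gdb.unwinder.register_unwinder(None, SnifferDemo(), replace=True)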

    Initially I proposed a solution in line with the one proposed in
    bugzilla, changing value_of_register to avoid creating a lazy register
    value.  However, when this was discussed on the mailing list I got
    this reply:

      https://sourceware.org/pipermail/gdb-patches/2020-June/169633.html

    Which led me to look at these two patches:

      [1] https://sourceware.org/pipermail/gdb-patches/2020-April/167612.html
      [2] https://sourceware.org/pipermail/gdb-patches/2020-April/167930.html

    When I considered patches [1] and [2] I saw that all of the issues
    being addressed here were related, and that there was a single
    solution that could address all of these issues.

    First I wrote the new test gdb.opt/inline-frame-tailcall.exp, which
    shows that [1] and [2] regress the inline tail-call unwinder.  The
    reason is that these two patches replace a call to gdbarch_unwind_pc
    with a call to get_frame_register; however, this is not correct.  The
    previous call to gdbarch_unwind_pc takes THIS_FRAME and returns the
    $pc value in the previous frame.  In contrast, get_frame_register
    takes THIS_FRAME and returns the value of the $pc in THIS_FRAME;
    these calls are not equivalent.

    The reason these patches appear to (or do) fix the regressions listed
    in [1] is that the tail-call sniffer depends on identifying the
    address of a caller and a callee.  GDB then looks for a tail-call
    sequence that takes us from the caller address to the callee; if such
    a sequence is found then tail-call frames are added.

    The bug that was being hit, and which was addressed in patch [1], is
    that in order to find the address of the caller, GDB ended up creating
    a lazy register value for an inline frame with no frame-id.  The
    solution in patch [1] is to instead take the address of the callee and
    treat this as the address of the caller.  Getting the address of the
    callee works, but we then end up looking for a tail-call sequence from
    the callee to the callee, which obviously doesn't return any sane
    results, so we don't insert any tail-call frames.

    The original patch [1] did cause some breakage, so patch [2] undid
    patch [1] in all cases except those where we had an inline frame with
    no frame-id.  It just so happens that there were no tests that fitted
    this description _and_ which required tail-call frames to be
    successfully spotted; as a result, patch [2] appeared to work.

    The new test inline-frame-tailcall.exp, exposes the flaw in patch [2].

    This commit undoes patches [1] and [2], and replaces them with a new
    solution, which is also different from the solution proposed in the
    python/22748 bug report.

    In this solution I propose that we introduce some special case logic
    to value_of_register_lazy.  To understand what this logic is, we must
    first look at how inline frames unwind registers.  This is very
    simple; they do this:

      static struct value *
      inline_frame_prev_register (struct frame_info *this_frame,
                                  void **this_cache, int regnum)
      {
        return get_frame_register_value (this_frame, regnum);
      }

    And remember:

      struct value *
      get_frame_register_value (struct frame_info *frame, int regnum)
      {
        return frame_unwind_register_value (frame->next, regnum);
      }

    So in all cases, unwinding a register in an inline frame just asks
    the next frame to unwind the register.  This makes sense, as an inline
    frame doesn't really exist; when we unwind a register in an inline
    frame, we're really just asking the next frame for the value of the
    register in the previous, non-inline frame.
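
    As a rough interactive illustration of this forwarding (assuming the
    inferior is stopped inside a function that has been inlined), the
    gdb.Frame API can be used to compare the two frames; this is just a
    sketch, not part of the change:

      import gdb

      frame = gdb.newest_frame()
      if frame.type() == gdb.INLINE_FRAME:
          # The frame this code was inlined into is the next older frame.
          outer = frame.older()
          # Per inline_frame_prev_register above, both reads are answered
          # by the same non-inline frame, so the values should agree.
          print(frame.read_register("sp"), outer.read_register("sp"))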

    So, if we assume that we only get into the missing frame-id situation
    when we try to unwind a register from an inline frame during the frame
    sniffing process, then we can change value_of_register_lazy to not
    create lazy register values for an inline frame.

    Imagine this stack setup, where #1 is inline within #2.

      #3 -> #2 -> #1 -> #0
            \______/
             inline

    Now when trying to figure out the frame-id for #1, we need to compute
    the frame-id for #2.  If the frame sniffer for #2 causes a lazy
    register read in #2, either due to a Python Unwinder, or for the
    tail-call sniffer, then we call value_of_register_lazy passing in
    frame #2.

    In value_of_register_lazy, we grab the next frame, which is #1, and
    we would then ask for the frame-id of #1, which had not yet been
    computed; this was our bug.

    Now, I propose we spot that #1 is an inline frame, and so look up the
    next frame of #1, which is #0.  As #0 is not inline, it will have a
    valid frame-id, and so we create a lazy register value using #0 as the
    next-frame-id.  This will give us the exact same result we had
    previously (thanks to the code we inspected above).

    Encoding into value_of_register_lazy the knowledge that reading an
    inline frame register will always just forward to the next frame
    feels... not ideal, but this seems like the cleanest solution to this
    recursive frame-id computation/sniffing issue that appears to crop
    up.

    The following two commits are fully reverted with this commit, these
    correspond to patches [1] and [2] respectively:

      commit 5939967b355ba6a940887d19847b7893a4506067
      Date:   Tue Apr 14 17:26:22 2020 -0300

          Fix inline frame unwinding breakage

      commit 991a3e2e9944a4b3a27bd989ac03c18285bd545d
      Date:   Sat Apr 25 00:32:44 2020 -0300

          Fix remaining inline/tailcall unwinding breakage for x86_64

    gdb/ChangeLog:

            PR python/22748
            * dwarf2/frame-tailcall.c (dwarf2_tailcall_sniffer_first): Remove
            special handling for inline frames.
            * findvar.c (value_of_register_lazy): Skip inline frames when
            creating lazy register values.
            * frame.c (frame_id_computed_p): Delete definition.
            * frame.h (frame_id_computed_p): Delete declaration.

    gdb/testsuite/ChangeLog:

            PR python/22748
            * gdb.opt/inline-frame-tailcall.c: New file.
            * gdb.opt/inline-frame-tailcall.exp: New file.
            * gdb.python/py-unwind-inline.c: New file.
            * gdb.python/py-unwind-inline.exp: New file.
            * gdb.python/py-unwind-inline.py: New file.


* [Bug python/22748] crash from custom unwinder
       [not found] <bug-22748-4717@http.sourceware.org/bugzilla/>
  2020-07-06 17:40 ` [Bug python/22748] crash from custom unwinder cvs-commit at gcc dot gnu.org
@ 2021-06-07 14:45 ` tromey at sourceware dot org
  2022-05-11 15:46 ` ludvig.janiuk at gmail dot com
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 5+ messages in thread
From: tromey at sourceware dot org @ 2021-06-07 14:45 UTC (permalink / raw)
  To: gdb-prs

https://sourceware.org/bugzilla/show_bug.cgi?id=22748

--- Comment #11 from Tom Tromey <tromey at sourceware dot org> ---
Andrew, is this bug fixed?
It seems to me that it is, but I wanted to double-check.


* [Bug python/22748] crash from custom unwinder
       [not found] <bug-22748-4717@http.sourceware.org/bugzilla/>
  2020-07-06 17:40 ` [Bug python/22748] crash from custom unwinder cvs-commit at gcc dot gnu.org
  2021-06-07 14:45 ` tromey at sourceware dot org
@ 2022-05-11 15:46 ` ludvig.janiuk at gmail dot com
  2022-06-10 22:25 ` tromey at sourceware dot org
  2022-06-15 15:04 ` aburgess at redhat dot com
  4 siblings, 0 replies; 5+ messages in thread
From: ludvig.janiuk at gmail dot com @ 2022-05-11 15:46 UTC (permalink / raw)
  To: gdb-prs

https://sourceware.org/bugzilla/show_bug.cgi?id=22748

Ludvig Janiuk <ludvig.janiuk at gmail dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |ludvig.janiuk at gmail dot com

--- Comment #12 from Ludvig Janiuk <ludvig.janiuk at gmail dot com> ---
I'd also like to know if it has been fixed. I experienced this issue
with GDB 9.2 on Ubuntu.


* [Bug python/22748] crash from custom unwinder
       [not found] <bug-22748-4717@http.sourceware.org/bugzilla/>
                   ` (2 preceding siblings ...)
  2022-05-11 15:46 ` ludvig.janiuk at gmail dot com
@ 2022-06-10 22:25 ` tromey at sourceware dot org
  2022-06-15 15:04 ` aburgess at redhat dot com
  4 siblings, 0 replies; 5+ messages in thread
From: tromey at sourceware dot org @ 2022-06-10 22:25 UTC (permalink / raw)
  To: gdb-prs

https://sourceware.org/bugzilla/show_bug.cgi?id=22748

Tom Tromey <tromey at sourceware dot org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |aburgess at redhat dot com

--- Comment #13 from Tom Tromey <tromey at sourceware dot org> ---
Aha, Andrew wasn't CC'd on the bug last time around.


* [Bug python/22748] crash from custom unwinder
       [not found] <bug-22748-4717@http.sourceware.org/bugzilla/>
                   ` (3 preceding siblings ...)
  2022-06-10 22:25 ` tromey at sourceware dot org
@ 2022-06-15 15:04 ` aburgess at redhat dot com
  4 siblings, 0 replies; 5+ messages in thread
From: aburgess at redhat dot com @ 2022-06-15 15:04 UTC (permalink / raw)
  To: gdb-prs

https://sourceware.org/bugzilla/show_bug.cgi?id=22748

Andrew Burgess <aburgess at redhat dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|---                         |FIXED

--- Comment #14 from Andrew Burgess <aburgess at redhat dot com> ---
This issue should now be resolved.  If anyone can still reproduce it,
please feel free to reopen this bug.

