public inbox for gdb-patches@sourceware.org
From: Kevin Buettner <kevinb@redhat.com>
To: gdb-patches@sourceware.org
Subject: Re: Fix for prologue processing on PowerPC
Date: Thu, 05 Oct 2017 02:01:00 -0000	[thread overview]
Message-ID: <20171004190148.0a8c9879@pinnacle.lan> (raw)
In-Reply-To: <f90d189e-7ec5-34f9-c776-8af42a3c07a6@rt-rk.com>

On Fri, 22 Sep 2017 14:11:51 +0200
Nikola Prica <nikola.prica@rt-rk.com> wrote:

>    One of the conditions in skip_prologue() is never visited because it
>    expects a non-shifted `lr_reg`. That condition is supposed to set the
>    PC offset. When the body of this condition is visited, the PC offset
>    is set and there is no need to look for it in subsequent frames, nor
>    to use frame unwind directives.
> 
> gdb/ChangeLog:
>         * rs6000-tdep.c (skip_prologue): Remove shifting for lr_reg
>          and assign shifted lr_reg to fdata->lr_register when lr_reg is
>          set.
> ---
>  gdb/rs6000-tdep.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/gdb/rs6000-tdep.c b/gdb/rs6000-tdep.c
> index 95b2ca7..7f64901 100644
> --- a/gdb/rs6000-tdep.c
> +++ b/gdb/rs6000-tdep.c
> @@ -1652,11 +1652,14 @@ skip_prologue (struct gdbarch *gdbarch, CORE_ADDR pc, CORE_ADDR lim_pc,
>  
>  	     remember just the first one, but skip over additional
>  	     ones.  */
> -	  if (lr_reg == -1)
> -	    lr_reg = (op & 0x03e00000) >> 21;
> -          if (lr_reg == 0)
> -            r0_contains_arg = 0;
> -	  continue;
> +    if (lr_reg == -1)
> +      {
> +        lr_reg = (op & 0x03e00000);
> +        fdata->lr_register = lr_reg >> 21;
> +      }
> +    if (lr_reg == 0)
> +      r0_contains_arg = 0;
> +    continue;
>  	}
>        else if ((op & 0xfc1fffff) == 0x7c000026)
>  	{			/* mfcr Rx */
> @@ -2178,9 +2181,6 @@ skip_prologue (struct gdbarch *gdbarch, CORE_ADDR pc, CORE_ADDR lim_pc,
>      }
>  #endif /* 0 */
>  
> -  if (pc == lim_pc && lr_reg >= 0)
> -    fdata->lr_register = lr_reg;
> -
>    fdata->offset = -fdata->offset;
>    return last_prologue_pc;
>  }
> -- 
> 2.7.4
> 

Hi Nikola,

The current version of the code causes fdata->lr_register to be set
only when lr_reg has been previously set (>= 0) and when pc == lim_pc.

I.e. it won't be set when the loop scanning the prologue exits early.
Just looking over that loop, there are a number of break statements
which might cause such an early exit.

My concern with your patch is that, by setting fdata->lr_register
within the loop, fdata->lr_register could end up with a value that we
don't want it to have on one of those early loop exits where pc != lim_pc.

That said, I have not done an especially deep analysis of this matter.
If you think that fdata->lr_register should be set for all of those
other conditions too, then...  well, I'm willing to consider that
proposition.  But I'd like to see some analysis of why this is so.

Otherwise, it seems to me that the patch below (which I have not
tested) might fix the bug that you found without perturbing the logic
regarding pc != lim_pc.  If it looks okay to you, please try it out
on your test case (and regression-test it using the testsuite too).
If it's okay, then go ahead and commit it with a suitably adjusted
ChangeLog entry.

Thanks,

Kevin

diff --git a/gdb/rs6000-tdep.c b/gdb/rs6000-tdep.c
index 95b2ca7..86c0915 100644
--- a/gdb/rs6000-tdep.c
+++ b/gdb/rs6000-tdep.c
@@ -1653,7 +1653,7 @@ skip_prologue (struct gdbarch *gdbarch, CORE_ADDR pc, CORE_ADDR lim_pc,
 	     remember just the first one, but skip over additional
 	     ones.  */
 	  if (lr_reg == -1)
-	    lr_reg = (op & 0x03e00000) >> 21;
+	    lr_reg = op & 0x03e00000;
           if (lr_reg == 0)
             r0_contains_arg = 0;
 	  continue;
@@ -2179,7 +2179,7 @@ skip_prologue (struct gdbarch *gdbarch, CORE_ADDR pc, CORE_ADDR lim_pc,
 #endif /* 0 */
 
   if (pc == lim_pc && lr_reg >= 0)
-    fdata->lr_register = lr_reg;
+    fdata->lr_register = lr_reg >> 21;
 
   fdata->offset = -fdata->offset;
   return last_prologue_pc;


Thread overview: 19+ messages
2017-09-22 12:11 Nikola Prica
2017-10-05  2:01 ` Kevin Buettner [this message]
2017-10-26 10:02 ` [PING]Fix " Nikola Prica
2017-11-08 16:58   ` Kevin Buettner
2017-11-09 18:15     ` [PING][PATCH] Fix " Nikola Prica
2017-12-01 19:37       ` pedromfc
2017-12-19 15:57         ` Nikola Prica
2017-12-29 18:05           ` Pedro Franco de Carvalho
2018-01-09 11:22             ` Nikola Prica
2018-01-10 17:26               ` Pedro Franco de Carvalho
2018-01-11 15:12                 ` Nikola Prica
2018-01-19  7:49                   ` Nikola Prica
2018-01-19 19:01                     ` Pedro Franco de Carvalho
2018-01-19 19:08                     ` Pedro Franco de Carvalho
2018-01-27 14:32                       ` Nikola Prica
2018-01-31 18:04                         ` Pedro Franco de Carvalho
2018-01-31 18:28                           ` Ulrich Weigand
2018-02-01 13:09                             ` Nikola Prica
2018-01-02 10:29           ` Yao Qi
