From: Luis Machado <luis.machado@arm.com>
To: Torbjorn SVENSSON <torbjorn.svensson@foss.st.com>,
	Tomas Vanek <vanekt@volny.cz>,
	gdb-patches@sourceware.org
Cc: Yvan Roux <yvan.roux@foss.st.com>
Subject: Re: [PATCH v2 3/4] gdb: dwarf2 generic implementation for caching function data
Date: Thu, 8 Dec 2022 01:11:00 +0000
Message-ID: <83c964b3-a618-433d-c87f-69c1c0a34476@arm.com>
In-Reply-To: <66d62a68-fc39-5288-1ee2-63c03b85ba47@foss.st.com>

On 11/30/22 10:16, Torbjorn SVENSSON wrote:
> Hi,
> 
> On 2022-11-29 17:24, Tomas Vanek wrote:
>> Hi Torbjorn,
>>
>> On 29/11/2022 16:19, Torbjorn SVENSSON wrote:
>>> Hi,
>>>
>>> I've had a long discussion with Luis on IRC regarding the points mentioned here, but I'll reply to the list now in order to get more eyes on the topic.
>>>
>>>
>>> On 2022-11-21 22:16, Luis Machado wrote:
>>>> Hi,
>>>>
>>>> On 11/18/22 15:52, Torbjörn SVENSSON wrote:
>>>>> When there is no dwarf2 data for a register, a function can be called
>>>>> to provide the value of this register.  In some situations, it might
>>>>> not be trivial to determine the value to return and it would cause a
>>>>> performance bottleneck to do the computation each time.
>>>>>
>>>>> This patch allows the called function to have a "cache" object that it
>>>>> can use to store some metadata between calls to reduce the performance
>>>>> impact of the complex logic.
>>>>>
>>>>> The cache object is unique for each function and frame, so if there is
>>>>> more than one function pointer stored in the dwarf2_frame_cache->reg
>>>>> array, then the appropriate pointer will be supplied (the type is not
>>>>> known by the dwarf2 implementation).
>>>>>
>>>>> dwarf2_frame_get_fn_data can be used to retrieve the function unique
>>>>> cache object.
>>>>> dwarf2_frame_allocate_fn_data can be used to allocate and retrieve the
>>>>> function unqiue cache object.
>>>>
>>>> unqiue -> unique
>>>>
>>>>>
>>>>> Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
>>>>> Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
>>>>> ---
>>>>>   gdb/dwarf2/frame.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++
>>>>>   gdb/dwarf2/frame.h | 20 +++++++++++++++++--
>>>>>   2 files changed, 66 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/gdb/dwarf2/frame.c b/gdb/dwarf2/frame.c
>>>>> index 3f884abe1d5..bff3b706e7e 100644
>>>>> --- a/gdb/dwarf2/frame.c
>>>>> +++ b/gdb/dwarf2/frame.c
>>>>> @@ -831,6 +831,14 @@ dwarf2_fetch_cfa_info (struct gdbarch *gdbarch, CORE_ADDR pc,
>>>>>   }
>>>>>   \f
>>>>> +struct dwarf2_frame_fn_data
>>>>> +{
>>>>> +  struct value *(*fn) (frame_info_ptr this_frame, void **this_cache,
>>>>> +               int regnum);
>>>>> +  void *data;
>>>>> +  struct dwarf2_frame_fn_data* next;
>>>>> +};
>>>>> +
>>>>
>>>> I'm wondering if we really need to have a function pointer here. Isn't the cache supposed to be frame-wide and not
>>>> function-specific?
>>>>
>>>> If we don't need it, the cache just becomes an opaque data pointer.
>>>>
>>>>>   struct dwarf2_frame_cache
>>>>>   {
>>>>>     /* DWARF Call Frame Address.  */
>>>>> @@ -862,6 +870,8 @@ struct dwarf2_frame_cache
>>>>>        dwarf2_tailcall_frame_unwind unwinder so this field does not apply for
>>>>>        them.  */
>>>>>     void *tailcall_cache;
>>>>> +
>>>>> +  struct dwarf2_frame_fn_data *fn_data;
>>>>>   };
>>>>>   static struct dwarf2_frame_cache *
>>>>> @@ -1221,6 +1231,44 @@ dwarf2_frame_prev_register (frame_info_ptr this_frame, void **this_cache,
>>>>>       }
>>>>>   }
>>>>> +void *dwarf2_frame_get_fn_data (frame_info_ptr this_frame, void **this_cache,
>>>>> +                fn_prev_register fn)
>>>>> +{
>>>>> +  struct dwarf2_frame_fn_data *fn_data = nullptr;
>>>>> +  struct dwarf2_frame_cache *cache
>>>>> +    = dwarf2_frame_cache (this_frame, this_cache);
>>>>> +
>>>>> +  /* Find the object for the function.  */
>>>>> +  for (fn_data = cache->fn_data; fn_data; fn_data = fn_data->next)
>>>>> +    if (fn_data->fn == fn)
>>>>> +      return fn_data->data;
>>>>> +
>>>>> +  return nullptr;
>>>>> +}
>>>>> +
>>>>> +void *dwarf2_frame_allocate_fn_data (frame_info_ptr this_frame,
>>>>> +                     void **this_cache,
>>>>> +                     fn_prev_register fn, unsigned long size)
>>>>> +{
>>>>> +  struct dwarf2_frame_fn_data *fn_data = nullptr;
>>>>> +  struct dwarf2_frame_cache *cache
>>>>> +    = dwarf2_frame_cache (this_frame, this_cache);
>>>>> +
>>>>> +  /* First try to find an existing object.  */
>>>>> +  void *data = dwarf2_frame_get_fn_data (this_frame, this_cache, fn);
>>>>> +  if (data)
>>>>> +    return data;
>>>>> +
>>>>> +  /* No object found, lets create a new instance.  */
>>>>> +  fn_data = FRAME_OBSTACK_ZALLOC (struct dwarf2_frame_fn_data);
>>>>> +  fn_data->fn = fn;
>>>>> +  fn_data->data = frame_obstack_zalloc (size);
>>>>> +  fn_data->next = cache->fn_data;
>>>>> +  cache->fn_data = fn_data;
>>>>> +
>>>>> +  return fn_data->data;
>>>>> +}
>>>>
>>>> And if we only have a data pointer, we can return a reference to it through the argument, and then DWARF can cache it.
>>>>
>>>> We could even have a destructor/cleanup that can get called once the frames are destroyed.
>>>
>>> I don't think we can do that without introducing a lot more changes to the common code. My changes are designed to only affect arm (currently the only user of this functionality) and not every target that GDB supports. A simpler solution would mean that every target needs to be re-tested to confirm that the implementation does not break some other target.
>>>
>>>
>>>>
>>>>> +
>>>>>   /* Proxy for tailcall_frame_dealloc_cache for bottom frame of a virtual tail
>>>>>      call frames chain.  */
>>>>> diff --git a/gdb/dwarf2/frame.h b/gdb/dwarf2/frame.h
>>>>> index 06c8a10c178..444afd9f8eb 100644
>>>>> --- a/gdb/dwarf2/frame.h
>>>>> +++ b/gdb/dwarf2/frame.h
>>>>> @@ -66,6 +66,9 @@ enum dwarf2_frame_reg_rule
>>>>>   /* Register state.  */
>>>>> +typedef struct value *(*fn_prev_register) (frame_info_ptr this_frame,
>>>>> +                       void **this_cache, int regnum);
>>>>> +
>>>>>   struct dwarf2_frame_state_reg
>>>>>   {
>>>>>     /* Each register save state can be described in terms of a CFA slot,
>>>>> @@ -78,8 +81,7 @@ struct dwarf2_frame_state_reg
>>>>>         const gdb_byte *start;
>>>>>         ULONGEST len;
>>>>>       } exp;
>>>>> -    struct value *(*fn) (frame_info_ptr this_frame, void **this_cache,
>>>>> -             int regnum);
>>>>> +    fn_prev_register fn;
>>>>>     } loc;
>>>>>     enum dwarf2_frame_reg_rule how;
>>>>>   };
>>>>> @@ -262,4 +264,18 @@ extern int dwarf2_fetch_cfa_info (struct gdbarch *gdbarch, CORE_ADDR pc,
>>>>>                     const gdb_byte **cfa_start_out,
>>>>>                     const gdb_byte **cfa_end_out);
>>>>> +
>>>>> +/* Allocate a new instance of the function unique data.  */
>>>>> +
>>>>> +extern void *dwarf2_frame_allocate_fn_data (frame_info_ptr this_frame,
>>>>> +                        void **this_cache,
>>>>> +                        fn_prev_register fn,
>>>>> +                        unsigned long size);
>>>>> +
>>>>> +/* Retrieve the function unique data for this frame.  */
>>>>> +
>>>>> +extern void *dwarf2_frame_get_fn_data (frame_info_ptr this_frame,
>>>>> +                       void **this_cache,
>>>>> +                       fn_prev_register fn);
>>>>> +
>>>>>   #endif /* dwarf2-frame.h */
>>>>
>>>> As we've discussed before, I think the cache idea is nice if we have to deal with targets with multiple CFAs (in our case, we have either 4 SPs or 2 SPs, plus aliases).
>>>>
>>>> DWARF doesn't seem to support this at the moment, and the function HOW for DWARF is not smart enough to remember a previously fetched value. So it seems we have room
>>>> for improvement, unless there is a good reason elsewhere why we shouldn't have a cache.
>>>
>>>
>>> This patch does not provide a cache as such; it just provides a way for the callback function to save some additional data between calls for the same frame.
>>> The code above is generic in that it keeps one data object per function and frame. The reason for this design is that it makes it easy to ensure the data object is valid for the function that uses it, without inter-dependencies with some other function that might be called for another register on the same frame. You could even use a shared function as a callback; in that case the object would be large and public, and it would then make more sense to make the dwarf2 object public and extend it instead.
>>> My approach ensures that each callback function has its own data and that the data structure is "private" to the function. Two functions can share the struct definition for their data, but they cannot share an instance of it.
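[Editor's note: for illustration of the per-function data flow described above, here is a minimal sketch of a DWARF2_FRAME_REG_FN callback using the proposed dwarf2_frame_allocate_fn_data API. The struct, callback and example_compute_sp names are made up for this example; only the allocate call comes from the patch.]

  /* Illustrative only: an arch-specific callback caching an expensive
     computation in its per-function, per-frame data object.  */

  struct example_fn_cache
  {
    int computed_p;      /* Non-zero once the expensive lookup has run.  */
    CORE_ADDR sp_value;  /* The value we want to reuse across calls.  */
  };

  static struct value *
  example_prev_register (frame_info_ptr this_frame, void **this_cache,
			 int regnum)
  {
    /* Returns the existing per-function object if one was already
       allocated for this frame, otherwise allocates a zeroed object of
       the requested size on the frame obstack.  */
    struct example_fn_cache *fn_cache
      = (struct example_fn_cache *)
	  dwarf2_frame_allocate_fn_data (this_frame, this_cache,
					 example_prev_register,
					 sizeof (*fn_cache));

    if (!fn_cache->computed_p)
      {
	/* The expensive computation runs only once per frame.  */
	fn_cache->sp_value = example_compute_sp (this_frame);  /* made up */
	fn_cache->computed_p = 1;
      }

    return frame_unwind_got_address (this_frame, regnum, fn_cache->sp_value);
  }
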
>>
>> Sorry, the per-function pointers look like overkill to me.
>> Maybe I'm just an old-school programmer who doesn't like associative arrays...
>> - frame unwinders use a generic pointer, and ensuring the proper type cast is entirely the responsibility of the implementation.
>> - we just need to replicate similar functionality for architecture-dependent handling of dwarf2 frames
>> - functions assigned to a dwarf2 frame by how = DWARF2_FRAME_REG_FN are never isolated functions from different parts
>> of the code: a gdbarch can set only one initializer via dwarf2_frame_set_init_reg(), and that initializer sets all of the functions
>> - if we ever have more than one function assigned in a single dwarf2 frame, it seems likely to me that all of the functions would prefer a single shared cache over isolated ones
> 
> 
> Based on the points above, can you please answer the questions below?

I'll try to clarify some of them from my personal perspective.

> 
> a) Why is there a function pointer per register if it's always going to be a single one? If your suggestion made sense, it would have been simpler to have a single bool telling whether the function should be called.

That's correct. Each register (dwarf register column) gets to define a HOW, so you can have different functions for different registers. It may not be useful to have multiple different functions for this purpose though.
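[Editor's note: for context, the per-register HOW assignment happens in the gdbarch's init_reg hook, registered via dwarf2_frame_set_init_reg. A simplified sketch follows; the real arm hook has more cases, and example_prev_register is the callback sketched earlier.]

  /* Simplified sketch of a gdbarch init_reg hook assigning different
     HOWs (and potentially different functions) per register column.  */

  static void
  example_dwarf2_frame_init_reg (struct gdbarch *gdbarch, int regnum,
				 struct dwarf2_frame_state_reg *reg,
				 frame_info_ptr this_frame)
  {
    if (regnum == gdbarch_pc_regnum (gdbarch))
      {
	/* Unwind this register by calling a function.  */
	reg->how = DWARF2_FRAME_REG_FN;
	reg->loc.fn = example_prev_register;
      }
    else if (regnum == gdbarch_sp_regnum (gdbarch))
      reg->how = DWARF2_FRAME_REG_CFA;
  }

  /* In the gdbarch initialization function:  */
  dwarf2_frame_set_init_reg (gdbarch, example_dwarf2_frame_init_reg);
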

> 
> b) If you want a shared cache object, then why are all the cache types private to the compilation unit? You can't share something across 2 compilation units when it is only defined in the .c file.

There is probably some historic reason for doing it that way. My guess is that we want to isolate the dwarf unwinding machinery as much as possible from the rest of the code. Having other code
influence the dwarf unwinding logic may not be desirable. But we do have ways to do it, like the callback HOW.

With that said, I agree with Tomas that we should have a single data pointer/cache per frame. If we need to touch a bit more generic code, that is fine as long as it is done in a flexible future-proof way.

We already have a good mechanism for storing register values, and that's the trad_frame_saved_reg structure that we use for most of the prologue unwinder/analyzer code. That structure allocates things on the
frame obstack, so memory release is done automatically when the frame data gets flushed.

A getter function is needed so we can return the trad_frame_saved_reg array contained in the opaque (to the arch-specific code) dwarf2_frame_cache struct.

With the trad_frame_saved_reg array in hand, the arch-specific dwarf2 unwinder can cache some useful data for registers that don't have DWARF ids.
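[Editor's note: a rough sketch of what that could look like. The dwarf2_frame_get_saved_regs getter, the arch_saved_regs member and example_compute_value are hypothetical and do not exist today; the trad-frame calls are the existing GDB API as best as can be told.]

  /* Hypothetical getter in dwarf2/frame.c: expose a per-frame
     trad_frame_saved_reg array, allocated lazily on the frame obstack.  */

  trad_frame_saved_reg *
  dwarf2_frame_get_saved_regs (frame_info_ptr this_frame, void **this_cache)
  {
    struct dwarf2_frame_cache *cache
      = dwarf2_frame_cache (this_frame, this_cache);

    if (cache->arch_saved_regs == nullptr)   /* hypothetical new member */
      cache->arch_saved_regs = trad_frame_alloc_saved_regs (this_frame);

    return cache->arch_saved_regs;
  }

  /* The earlier example callback rewritten to this approach: compute the
     value once, then let trad-frame answer later requests from the cache.  */

  static struct value *
  example_prev_register (frame_info_ptr this_frame, void **this_cache,
			 int regnum)
  {
    trad_frame_saved_reg *saved_regs
      = dwarf2_frame_get_saved_regs (this_frame, this_cache);

    if (!saved_regs[regnum].is_value ())
      saved_regs[regnum].set_value
	(example_compute_value (this_frame, regnum));  /* made-up helper */

    return trad_frame_get_prev_register (this_frame, saved_regs, regnum);
  }
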

> 
> 
> Don't get me wrong, I'm not against defining the cache types in the public namespace. Doing so would have a large impact on the GDB code base, though, and would require a lot more testing than the change I'm proposing.

If we define a new HOW for this, the changes will be easier to test. Code using the non-cached version of the callback HOW will behave the same as before.

We could potentially have a single callback HOW and teach gdb how to detect that an arch-specific cache is being used. But I think that is beyond the scope of fixing the performance issue here.

> - Is it worth the extra risk?
> - Who can actually do the testing on all the targets to make sure that nothing broke?
> 
> Kind regards,
> Torbjörn
> 
> 
>>
>>>
>>>
>>>> It would be nice to have some opinions from others, so we can potentially shape this in a way that makes it useful for the general case.
>>>
>>> Yes. Please give me some more feedback to work on!
>>>
>>> Kind regards,
>>> Torbjörn
>>
>> regards
>>      Tomas


Thread overview: 24+ messages
2022-11-18 15:52 [PATCH 0/4] v2 gdb/arm: Fixes for Cortex-M stack unwinding Torbjörn SVENSSON
2022-11-18 15:52 ` [PATCH v2 1/4] gdb/arm: Update active msp/psp when switching stack Torbjörn SVENSSON
2022-11-21 14:04   ` Luis Machado
2022-11-18 15:52 ` [PATCH v2 2/4] gdb/arm: Ensure that stack pointers are in sync Torbjörn SVENSSON
2022-11-21 14:04   ` Luis Machado
2022-11-18 15:52 ` [PATCH v2 3/4] gdb: dwarf2 generic implementation for caching function data Torbjörn SVENSSON
2022-11-18 16:01   ` Torbjorn SVENSSON
2022-12-20 21:04     ` Tom Tromey
2022-11-21 21:16   ` Luis Machado
2022-11-29 15:19     ` Torbjorn SVENSSON
2022-11-29 16:24       ` Tomas Vanek
2022-11-30 10:16         ` Torbjorn SVENSSON
2022-11-30 10:19           ` Luis Machado
2022-12-08  1:11           ` Luis Machado [this message]
2022-12-19 19:28     ` [PING] " Torbjorn SVENSSON
2022-12-20 21:02   ` Tom Tromey
2022-12-28 16:16     ` Torbjorn SVENSSON
2023-01-05 20:53       ` Torbjorn SVENSSON
2023-01-14  6:54       ` Joel Brobecker
2023-01-18 18:47   ` Tom Tromey
2023-01-19 10:31     ` Torbjorn SVENSSON
2022-11-18 15:52 ` [PATCH v2 4/4] gdb/arm: Use new dwarf2 function cache Torbjörn SVENSSON
2022-11-21 21:04   ` Luis Machado
2022-11-29 15:19     ` Torbjorn SVENSSON
