public inbox for gdb-patches@sourceware.org
From: Tomas Vanek <vanekt@volny.cz>
To: Torbjorn SVENSSON <torbjorn.svensson@foss.st.com>,
	Luis Machado <luis.machado@arm.com>,
	gdb-patches@sourceware.org
Cc: Yvan Roux <yvan.roux@foss.st.com>
Subject: Re: [PATCH v2 3/4] gdb: dwarf2 generic implementation for caching function data
Date: Tue, 29 Nov 2022 17:24:27 +0100	[thread overview]
Message-ID: <c7582245-4639-246d-fa2e-9d61f3e1085f@volny.cz> (raw)
In-Reply-To: <4461f93e-b29d-80bc-3249-04cbcb94be99@foss.st.com>

Hi Torbjorn,

On 29/11/2022 16:19, Torbjorn SVENSSON wrote:
> Hi,
>
> I've had a long discussion with Luis on IRC regarding the points 
> mentioned here, but I'll reply to the list now in order to get more 
> eyes on the topic.
>
>
> On 2022-11-21 22:16, Luis Machado wrote:
>> Hi,
>>
>> On 11/18/22 15:52, Torbjörn SVENSSON wrote:
>>> When there is no dwarf2 data for a register, a function can be called
>>> to provide the value of this register.  In some situations, it might
>>> not be trivial to determine the value to return and it would cause a
>>> performance bottleneck to do the computation each time.
>>>
>>> This patch allows the called function to have a "cache" object that it
>>> can use to store some metadata between calls to reduce the performance
>>> impact of the complex logic.
>>>
>>> The cache object is unique for each function and frame, so if there are
>>> more than one function pointer stored in the dwarf2_frame_cache->reg
>>> array, then the appropriate pointer will be supplied (the type is not
>>> known by the dwarf2 implementation).
>>>
>>> dwarf2_frame_get_fn_data can be used to retrieve the function unique
>>> cache object.
>>> dwarf2_frame_allocate_fn_data can be used to allocate and retrieve the
>>> function unqiue cache object.
>>
>> unqiue -> unique
>>
>>>
>>> Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
>>> Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
>>> ---
>>>   gdb/dwarf2/frame.c | 48 
>>> ++++++++++++++++++++++++++++++++++++++++++++++
>>>   gdb/dwarf2/frame.h | 20 +++++++++++++++++--
>>>   2 files changed, 66 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/gdb/dwarf2/frame.c b/gdb/dwarf2/frame.c
>>> index 3f884abe1d5..bff3b706e7e 100644
>>> --- a/gdb/dwarf2/frame.c
>>> +++ b/gdb/dwarf2/frame.c
>>> @@ -831,6 +831,14 @@ dwarf2_fetch_cfa_info (struct gdbarch *gdbarch, 
>>> CORE_ADDR pc,
>>>   }
>>>   \f
>>> +struct dwarf2_frame_fn_data
>>> +{
>>> +  struct value *(*fn) (frame_info_ptr this_frame, void **this_cache,
>>> +               int regnum);
>>> +  void *data;
>>> +  struct dwarf2_frame_fn_data* next;
>>> +};
>>> +
>>
>> I'm wondering if we really need to have a function pointer here. 
>> Isn't the cache supposed to be frame-wide and not
>> function-specific?
>>
>> If we don't need it, the cache just becomes an opaque data pointer.
>>
>>>   struct dwarf2_frame_cache
>>>   {
>>>     /* DWARF Call Frame Address.  */
>>> @@ -862,6 +870,8 @@ struct dwarf2_frame_cache
>>>        dwarf2_tailcall_frame_unwind unwinder so this field does not 
>>> apply for
>>>        them.  */
>>>     void *tailcall_cache;
>>> +
>>> +  struct dwarf2_frame_fn_data *fn_data;
>>>   };
>>>   static struct dwarf2_frame_cache *
>>> @@ -1221,6 +1231,44 @@ dwarf2_frame_prev_register (frame_info_ptr 
>>> this_frame, void **this_cache,
>>>       }
>>>   }
>>> +void *dwarf2_frame_get_fn_data (frame_info_ptr this_frame, void 
>>> **this_cache,
>>> +                fn_prev_register fn)
>>> +{
>>> +  struct dwarf2_frame_fn_data *fn_data = nullptr;
>>> +  struct dwarf2_frame_cache *cache
>>> +    = dwarf2_frame_cache (this_frame, this_cache);
>>> +
>>> +  /* Find the object for the function.  */
>>> +  for (fn_data = cache->fn_data; fn_data; fn_data = fn_data->next)
>>> +    if (fn_data->fn == fn)
>>> +      return fn_data->data;
>>> +
>>> +  return nullptr;
>>> +}
>>> +
>>> +void *dwarf2_frame_allocate_fn_data (frame_info_ptr this_frame,
>>> +                     void **this_cache,
>>> +                     fn_prev_register fn, unsigned long size)
>>> +{
>>> +  struct dwarf2_frame_fn_data *fn_data = nullptr;
>>> +  struct dwarf2_frame_cache *cache
>>> +    = dwarf2_frame_cache (this_frame, this_cache);
>>> +
>>> +  /* First try to find an existing object.  */
>>> +  void *data = dwarf2_frame_get_fn_data (this_frame, this_cache, fn);
>>> +  if (data)
>>> +    return data;
>>> +
>>> +  /* No object found, lets create a new instance.  */
>>> +  fn_data = FRAME_OBSTACK_ZALLOC (struct dwarf2_frame_fn_data);
>>> +  fn_data->fn = fn;
>>> +  fn_data->data = frame_obstack_zalloc (size);
>>> +  fn_data->next = cache->fn_data;
>>> +  cache->fn_data = fn_data;
>>> +
>>> +  return fn_data->data;
>>> +}
>>
>> And if we only have a data pointer, we can return a reference to it 
>> through the argument, and then DWARF can cache it.
>>
>> We could even have a destructor/cleanup that can get called once the 
>> frames are destroyed.
>
> I don't think we can do that without introducing a lot more changes to 
> the common code. My changes are designed in a way that would only have 
> an impact on arm (as they are the only users for the functionality 
> right now) and not for every target out there that GDB supports. If 
> going for a simpler solution, it would mean that every target needs to 
> be re-tested in order to get the confirmation that the implementation 
> would not break some other target.
>
>
>>
>>> +
>>>   /* Proxy for tailcall_frame_dealloc_cache for bottom frame of a 
>>> virtual tail
>>>      call frames chain.  */
>>> diff --git a/gdb/dwarf2/frame.h b/gdb/dwarf2/frame.h
>>> index 06c8a10c178..444afd9f8eb 100644
>>> --- a/gdb/dwarf2/frame.h
>>> +++ b/gdb/dwarf2/frame.h
>>> @@ -66,6 +66,9 @@ enum dwarf2_frame_reg_rule
>>>   /* Register state.  */
>>> +typedef struct value *(*fn_prev_register) (frame_info_ptr this_frame,
>>> +                       void **this_cache, int regnum);
>>> +
>>>   struct dwarf2_frame_state_reg
>>>   {
>>>     /* Each register save state can be described in terms of a CFA 
>>> slot,
>>> @@ -78,8 +81,7 @@ struct dwarf2_frame_state_reg
>>>         const gdb_byte *start;
>>>         ULONGEST len;
>>>       } exp;
>>> -    struct value *(*fn) (frame_info_ptr this_frame, void **this_cache,
>>> -             int regnum);
>>> +    fn_prev_register fn;
>>>     } loc;
>>>     enum dwarf2_frame_reg_rule how;
>>>   };
>>> @@ -262,4 +264,18 @@ extern int dwarf2_fetch_cfa_info (struct 
>>> gdbarch *gdbarch, CORE_ADDR pc,
>>>                     const gdb_byte **cfa_start_out,
>>>                     const gdb_byte **cfa_end_out);
>>> +
>>> +/* Allocate a new instance of the function unique data.  */
>>> +
>>> +extern void *dwarf2_frame_allocate_fn_data (frame_info_ptr this_frame,
>>> +                        void **this_cache,
>>> +                        fn_prev_register fn,
>>> +                        unsigned long size);
>>> +
>>> +/* Retrieve the function unique data for this frame.  */
>>> +
>>> +extern void *dwarf2_frame_get_fn_data (frame_info_ptr this_frame,
>>> +                       void **this_cache,
>>> +                       fn_prev_register fn);
>>> +
>>>   #endif /* dwarf2-frame.h */
>>
>> As we've discussed before, I think the cache idea is nice if we have 
>> to deal with targets with multiple CFA's (in our case, we have either 
>> 4 SP's or 2 SP's, plus aliases).
>>
>> DWARF doesn't seem to support this at the moment, and the function 
>> HOW for DWARF is not smart enough to remember a previously-fetched 
>> value. So it seems we have room
>> for some improvement, unless there is enough reason elsewhere about 
>> why we shouldn't have a cache.
>
>
> This patch does not provide a cache or anything, it just provides a 
> way for the callback function to save some additional data between 
> calls for the same frame.
> The code above is generic in the way that it has one data object per 
> function and frame. The reason for this implementation is that it's 
> rather easy to ensure that the data object is okay for the function 
> that uses it without any inter-dependencies with some other function 
> that might be called for some other register on the same frame. You 
> could even consider having a shared function to be a callback 
> function. In the case of a shared function, that would mean that the 
> object would be large and public and then it would simply make more 
> sense to make the dwarf2 object public and extend it instead.
> My approach ensures that each callback function has its own data and 
> the data structure is "private" to the function. It's possible to 
> share the struct for the data object between 2 functions, but it's not 
> possible to share the instance of the struct between 2 functions.

Sorry, the per-function pointers look like overkill to me.
Maybe I'm just an old-school programmer who doesn't like associative arrays...
- Frame unwinders use a single generic pointer, and ensuring the proper 
type cast is fully the responsibility of the implementation.
- We just need to replicate similar functionality for the 
architecture-dependent handling of dwarf2 frames.
- Functions assigned to a dwarf2 frame by how = DWARF2_FRAME_REG_FN are 
never isolated functions from unrelated parts of the code: a gdbarch can 
set only one initializer via dwarf2_frame_set_init_reg(), and that 
initializer sets all of the functions.
- If we ever have more than one function assigned in one dwarf2 frame, 
it seems likely to me that all of those functions would prefer a single 
shared cache over isolated ones.

>
>
>> It would be nice to have some opinions from others, so we can 
>> potentially shape this in a way that makes it useful for the general 
>> case.
>
> Yes. Please give me some more feedback to work on!
>
> Kind regards,
> Torbjörn

regards
     Tomas

Thread overview: 24+ messages
2022-11-18 15:52 [PATCH 0/4] v2 gdb/arm: Fixes for Cortex-M stack unwinding Torbjörn SVENSSON
2022-11-18 15:52 ` [PATCH v2 1/4] gdb/arm: Update active msp/psp when switching stack Torbjörn SVENSSON
2022-11-21 14:04   ` Luis Machado
2022-11-18 15:52 ` [PATCH v2 2/4] gdb/arm: Ensure that stack pointers are in sync Torbjörn SVENSSON
2022-11-21 14:04   ` Luis Machado
2022-11-18 15:52 ` [PATCH v2 3/4] gdb: dwarf2 generic implementation for caching function data Torbjörn SVENSSON
2022-11-18 16:01   ` Torbjorn SVENSSON
2022-12-20 21:04     ` Tom Tromey
2022-11-21 21:16   ` Luis Machado
2022-11-29 15:19     ` Torbjorn SVENSSON
2022-11-29 16:24       ` Tomas Vanek [this message]
2022-11-30 10:16         ` Torbjorn SVENSSON
2022-11-30 10:19           ` Luis Machado
2022-12-08  1:11           ` Luis Machado
2022-12-19 19:28     ` [PING] " Torbjorn SVENSSON
2022-12-20 21:02   ` Tom Tromey
2022-12-28 16:16     ` Torbjorn SVENSSON
2023-01-05 20:53       ` Torbjorn SVENSSON
2023-01-14  6:54       ` Joel Brobecker
2023-01-18 18:47   ` Tom Tromey
2023-01-19 10:31     ` Torbjorn SVENSSON
2022-11-18 15:52 ` [PATCH v2 4/4] gdb/arm: Use new dwarf2 function cache Torbjörn SVENSSON
2022-11-21 21:04   ` Luis Machado
2022-11-29 15:19     ` Torbjorn SVENSSON
