public inbox for gdb-patches@sourceware.org
* [PATCH] gdb: small cleanup in symbol_file_add_with_addrs
@ 2023-09-14 15:49 Andrew Burgess
  2023-09-14 16:17 ` Tom Tromey
  0 siblings, 1 reply; 2+ messages in thread
From: Andrew Burgess @ 2023-09-14 15:49 UTC (permalink / raw)
  To: gdb-patches; +Cc: Andrew Burgess

While looking at how gdb::observers::new_objfile was used, I found
some code in symbol_file_add_with_addrs that I thought could be
improved.

Instead of:

  if (condition)
   {
     ...
     return;
   }

  ...
  return;

where some parts of '...' are identical between the two branches, I
think it is nicer to remove the duplication and just use:

  if (!condition)
    ...

to guard the one statement that should only happen when the condition
is not true.
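
To make the shape of the change concrete, here is a minimal,
hypothetical sketch (plain C++, not GDB code; the names and the log
string are invented purely for illustration):

```cpp
#include <string>

// Hypothetical sketch of the cleanup: rather than duplicating the
// shared tail in both the early-return branch and the fall-through
// path, invert the condition so only the branch-specific statement
// is guarded and the tail appears exactly once.
std::string handle (bool have_reader)
{
  std::string log;

  // Before: 'if (!have_reader) { notify; return; }' followed by a
  // second notify/return, duplicating the tail.  After: one guard,
  // one tail.
  if (have_reader)
    log += "finish;";		/* only done when a reader exists */

  log += "notify;";		/* shared tail, written once */
  log += "close-cache;";	/* now runs on both paths */
  return log;
}
```

With this shape, handle (true) yields "finish;notify;close-cache;"
and handle (false) yields "notify;close-cache;" -- the tail runs on
both paths without being written twice.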

There is one change in this commit, though, that is (possibly)
significant: the call to bfd_cache_close_all() was only present in
the second block.  After this commit that function is called on both
paths.

The call to bfd_cache_close_all was added in commit:

  commit ce7d45220e4ed342d4a77fcd2f312e85e1100971
  Date:   Fri Jul 30 12:05:45 2004 +0000

with the purpose of ensuring that GDB doesn't hold the BFDs open
unnecessarily, thus preventing the files from being updated on some
hosts (e.g. Win32).

In the early exit case we previously didn't call bfd_cache_close_all,
with the result that GDB would continue to hold open some BFD objects
longer than needed.

After this commit, by calling bfd_cache_close_all for both paths,
this problem is solved.

I'm not sure how this change could be tested; I don't believe there's
any GDB (maintenance) command that displays the BFD cache contents,
so we can't check the cache contents easily.  Ideas are welcome
though.
---
 gdb/symfile.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/gdb/symfile.c b/gdb/symfile.c
index 1b46ec45f2e..c20c0c50b9a 100644
--- a/gdb/symfile.c
+++ b/gdb/symfile.c
@@ -1119,18 +1119,13 @@ symbol_file_add_with_addrs (const gdb_bfd_ref_ptr &abfd, const char *name,
      time.  */
   gdb_flush (gdb_stdout);
 
-  if (objfile->sf == NULL)
-    {
-      gdb::observers::new_objfile.notify (objfile);
-      return objfile;	/* No symbols.  */
-    }
-
-  finish_new_objfile (objfile, add_flags);
+  if (objfile->sf != nullptr)
+    finish_new_objfile (objfile, add_flags);
 
   gdb::observers::new_objfile.notify (objfile);
 
   bfd_cache_close_all ();
-  return (objfile);
+  return objfile;
 }
 
 /* Add BFD as a separate debug file for OBJFILE.  For NAME description

base-commit: d7680f13df105aa8fa0edbdf8efae31a5411f579
-- 
2.25.4



* Re: [PATCH] gdb: small cleanup in symbol_file_add_with_addrs
  2023-09-14 15:49 [PATCH] gdb: small cleanup in symbol_file_add_with_addrs Andrew Burgess
@ 2023-09-14 16:17 ` Tom Tromey
  0 siblings, 0 replies; 2+ messages in thread
From: Tom Tromey @ 2023-09-14 16:17 UTC (permalink / raw)
  To: Andrew Burgess via Gdb-patches; +Cc: Andrew Burgess

>>>>> "Andrew" == Andrew Burgess via Gdb-patches <gdb-patches@sourceware.org> writes:

Andrew> I'm not sure how this change could be tested, I don't believe there's
Andrew> any GDB (maintenance) command that displays the BFD cache contents, so
Andrew> we can't check the cache contents easily.  Ideas are welcome though.

Also, the cache isn't really inspectable without reaching into BFD
internals.

This patch looks good to me.
Approved-By: Tom Tromey <tom@tromey.com>

Tom

