Subject: [PATCH] bfd: add new bfd_cache_size() function
From: Andrew Burgess @ 2023-10-06 16:03 UTC
  To: binutils; +Cc: Andrew Burgess

In GDB we have a problem with the BFD cache.

As GDB can run for an extended period of time, if the BFD cache
holds an open file descriptor for an on-disk file, this can, on some
targets (e.g. Win32), prevent the OS from writing to the file.

This might, for example, prevent a user from recompiling their
executable while GDB (via the BFD cache) is holding an open reference
to that file.

Another problem relates to bfd_stat for BFDs that use the BFD cache
(i.e. those that call cache_bstat to implement bfd_stat).  The
cache_bstat function finds the BFD in the cache, opening the file if
needed, and then calls fstat on the open file descriptor.

This means that if the on-disk file changes while the cache is
holding an open reference to it, bfd_stat will return the 'struct
stat' for the old file, not the new file.

Now, for this second problem, we might be tempted to use an actual
stat call instead of bfd_stat.  However, this isn't ideal, as some
BFDs use a custom iovec and implement the various I/O functions over
GDB's remote protocol.  By using bfd_stat we have a single call that
works for both local files and remote files.
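
The point is that bfd_stat dispatches through the BFD's iovec.  A
simplified sketch of the idea (the real code in bfd/bfdio.c also
handles archive members and error reporting):

  int
  bfd_stat (bfd *abfd, struct stat *statbuf)
  {
    if (abfd->iovec == NULL)
      return -1;

    /* For a cache-backed BFD this lands in cache_bstat; for GDB's
       remote files it calls the custom iovec's bstat.  */
    return abfd->iovec->bstat (abfd, statbuf);
  }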

To solve both of these problems, GDB has bfd_cache_close_all calls
sprinkled around its code base, and in theory this should work fine.

However, I recently ran into a case where we had missed a
bfd_cache_close_all call, and as a result some BFDs were held open.
This caused a bfd_stat call to return an unexpected result (old file
vs new file).

What I'd like is some way, within GDB, to write:

  gdb_assert ( /* Nothing is held open in the cache.  */ );

This would allow GDB to quickly identify when we've missed a
bfd_cache_close_all call.

And so, to support this, I would like to add a new bfd_cache_size
function, which returns the number of open files in the BFD cache.  I
can then start adding:

  gdb_assert (bfd_cache_size () == 0);

to GDB in some strategic spots, and start fixing all of the missing
bfd_cache_close_all calls that crop up as a result.
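
For example (a hypothetical call site; the wrapper function here is
purely illustrative and not part of this patch):

  static void
  assert_bfd_cache_is_empty (void)
  {
    /* If this fires then some code path opened a BFD without a
       matching bfd_cache_close_all call.  */
    gdb_assert (bfd_cache_size () == 0);
  }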
---
 bfd/bfd-in2.h |  2 ++
 bfd/cache.c   | 17 +++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/bfd/bfd-in2.h b/bfd/bfd-in2.h
index eddb9902f5e..8ac7a2f2f8e 100644
--- a/bfd/bfd-in2.h
+++ b/bfd/bfd-in2.h
@@ -2790,6 +2790,8 @@ bool bfd_cache_close (bfd *abfd);
 
 bool bfd_cache_close_all (void);
 
+int bfd_cache_size (void);
+
 /* Extracted from compress.c.  */
 /* Types of compressed DWARF debug sections.  */
 enum compressed_debug_section_type
diff --git a/bfd/cache.c b/bfd/cache.c
index 357a38da599..8db42a26e66 100644
--- a/bfd/cache.c
+++ b/bfd/cache.c
@@ -574,6 +574,23 @@ bfd_cache_close_all (void)
   return ret;
 }
 
+/*
+FUNCTION
+	bfd_cache_size
+
+SYNOPSIS
+	int bfd_cache_size (void);
+
+DESCRIPTION
+	Return the number of open files in the cache.
+*/
+
+int
+bfd_cache_size (void)
+{
+  return open_files;
+}
+
 /*
 INTERNAL_FUNCTION
 	bfd_open_file

base-commit: 9a896be33224654760c46d3698218241d0a1f354
-- 
2.25.4

