Subject: [binutils-gdb] [gdbsupport] Add parallel_for_each_debug
From: Tom de Vries @ 2022-07-18  3:34 UTC
  To: gdb-cvs

https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=53944a3bf51cdff9ad30a0c3740b8124213fdab9

commit 53944a3bf51cdff9ad30a0c3740b8124213fdab9
Author: Tom de Vries <tdevries@suse.de>
Date:   Mon Jul 18 05:34:01 2022 +0200

    [gdbsupport] Add parallel_for_each_debug
    
    Add a parallel_for_each_debug variable, set to false by default.
    
    With an a.out compiled from a hello-world program, and with
    parallel_for_each_debug == true, we get:
    ...
    $ gdb -q -batch a.out -ex start
      ...
    Parallel for: n_elements: 7271
    Parallel for: minimum elements per thread: 10
    Parallel for: elts_per_thread: 1817
    Parallel for: elements on worker thread 0       : 1817
    Parallel for: elements on worker thread 1       : 1817
    Parallel for: elements on worker thread 2       : 1817
    Parallel for: elements on worker thread 3       : 0
    Parallel for: elements on main thread           : 1820
    
    Temporary breakpoint 1, main () at /home/vries/hello.c:6
    6         printf ("hello\n");
    ...
    
    Tested on x86_64-linux.
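
For reference, the distribution shown above follows from the code in the
diff below: with four worker threads, elts_per_thread = 7271 / 4 = 1817
(integer division).  n_threads - 1 = 3 chunks of 1817 elements are posted
to worker threads 0-2 (3 * 1817 = 5451), the remaining 7271 - 5451 = 1820
elements run on the main thread, and worker thread 3 stays idle because
the final chunk is handled by the calling thread rather than the pool.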

Diff:
---
 gdbsupport/parallel-for.h | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/gdbsupport/parallel-for.h b/gdbsupport/parallel-for.h
index a614fc35766..cfe8a6e4f09 100644
--- a/gdbsupport/parallel-for.h
+++ b/gdbsupport/parallel-for.h
@@ -139,7 +139,12 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
   using result_type
     = typename std::result_of<RangeFunction (RandomIt, RandomIt)>::type;
 
-  size_t n_threads = thread_pool::g_thread_pool->thread_count ();
+  /* If enabled, print debug info about how the work is distributed across
+     the threads.  */
+  const int parallel_for_each_debug = false;
+
+  size_t n_worker_threads = thread_pool::g_thread_pool->thread_count ();
+  size_t n_threads = n_worker_threads;
   size_t n_elements = last - first;
   size_t elts_per_thread = 0;
   if (n_threads > 1)
@@ -155,9 +160,19 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
   size_t count = n_threads == 0 ? 0 : n_threads - 1;
   gdb::detail::par_for_accumulator<result_type> results (count);
 
+  if (parallel_for_each_debug)
+    {
+      debug_printf (_("Parallel for: n_elements: %zu\n"), n_elements);
+      debug_printf (_("Parallel for: minimum elements per thread: %u\n"), n);
+      debug_printf (_("Parallel for: elts_per_thread: %zu\n"), elts_per_thread);
+    }
+
   for (int i = 0; i < count; ++i)
     {
       RandomIt end = first + elts_per_thread;
+      if (parallel_for_each_debug)
+	debug_printf (_("Parallel for: elements on worker thread %i\t: %zu\n"),
+		      i, (size_t)(end - first));
       results.post (i, [=] ()
         {
 	  return callback (first, end);
@@ -165,7 +180,14 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
       first = end;
     }
 
+  for (int i = count; i < n_worker_threads; ++i)
+    if (parallel_for_each_debug)
+      debug_printf (_("Parallel for: elements on worker thread %i\t: 0\n"), i);
+
   /* Process all the remaining elements in the main thread.  */
+  if (parallel_for_each_debug)
+    debug_printf (_("Parallel for: elements on main thread\t\t: %zu\n"),
+		  (size_t)(last - first));
   return results.finish ([=] ()
     {
       return callback (first, last);


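For context, here is a minimal sketch of the kind of call site whose work
distribution the new debug output describes.  The element type, the
process_one helper, and the call site itself are hypothetical; only the
parallel_for_each signature (minimum elements per thread, a random-access
range, and a callback taking a sub-range) and the gdb namespace implied by
gdb::detail above are taken from the header being patched.

#include "gdbsupport/parallel-for.h"

struct item { /* ... */ };           /* hypothetical element type */
extern void process_one (item &);    /* hypothetical per-element work */

/* Hand the range [first, last) to the thread pool, asking for at least
   10 elements per thread.  As in the loop above, the final chunk runs
   on the calling (main) thread, and any worker thread beyond the number
   of posted chunks receives zero elements.  */
void
process_items (item *first, item *last)
{
  gdb::parallel_for_each (10, first, last,
                          [] (item *start, item *stop)
                          {
                            for (item *p = start; p != stop; ++p)
                              process_one (*p);
                          });
}

Because parallel_for_each_debug is a compile-time constant in the header,
enabling the output means flipping it to true and rebuilding; each call
like the one above then prints the per-thread element counts shown in the
commit message.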