From: Tom Tromey <tromey@adacore.com>
To: gdb-patches@sourceware.org
Cc: Tom Tromey <tromey@adacore.com>
Subject: [PATCH v2 1/4] Avoid submitting empty tasks in parallel_for_each
Date: Tue, 10 Jan 2023 11:33:35 -0700
Message-Id: <20230110183338.2623088-2-tromey@adacore.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230110183338.2623088-1-tromey@adacore.com>
References: <20230110183338.2623088-1-tromey@adacore.com>

I found that parallel_for_each would submit empty tasks to the thread
pool.  For example, this can happen if the number of tasks is smaller
than the number of available threads.  In the DWARF reader, this
resulted in the cooked index containing empty sub-indices.
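To make the failure mode concrete, here is a small standalone
illustration (plain C++, not GDB code; the even-split arithmetic only
approximates what parallel_for_each actually does) of how dividing
fewer elements than workers produces empty chunks:

/* Standalone illustration: divide N elements evenly across T workers
   the way a naive splitter would.  With fewer elements than workers,
   the trailing chunks are empty, so a task with no work would be
   posted to the thread pool.  */

#include <cstdio>

int main ()
{
  const int n_elements = 2;   /* e.g. two compilation units */
  const int n_workers = 4;    /* but four threads in the pool */

  int per_worker = n_elements / n_workers;   /* 0 */
  int leftover = n_elements % n_workers;     /* 2 */

  int start = 0;
  for (int w = 0; w < n_workers; ++w)
    {
      int size = per_worker + (w < leftover ? 1 : 0);
      printf ("worker %d: [%d, %d)%s\n", w, start, start + size,
              size == 0 ? "  <-- empty task" : "");
      start += size;
    }
  return 0;
}

Each empty chunk becomes a task posted to the pool with nothing to do,
and a correspondingly empty slot in the result vector.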
This patch arranges to instead shrink the result vector and process
the trailing entries in the calling thread.
---
 gdb/unittests/parallel-for-selftests.c | 39 ++++++++++++++++++++++++++
 gdbsupport/parallel-for.h              | 30 ++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/gdb/unittests/parallel-for-selftests.c b/gdb/unittests/parallel-for-selftests.c
index 3162db18df1..15a095ae62b 100644
--- a/gdb/unittests/parallel-for-selftests.c
+++ b/gdb/unittests/parallel-for-selftests.c
@@ -149,6 +149,45 @@ TEST (int n_threads)
   SELF_CHECK (counter == NUMBER);
 
 #undef NUMBER
+
+  /* Check that if there are fewer tasks than threads, then we won't
+     end up with a null result.  */
+  std::vector<std::unique_ptr<int>> intresults;
+  std::atomic<bool> any_empty_tasks (false);
+
+  FOR_EACH (1, 0, 1,
+            [&] (int start, int end)
+              {
+                if (start == end)
+                  any_empty_tasks = true;
+                return std::unique_ptr<int> (new int (end - start));
+              });
+  SELF_CHECK (!any_empty_tasks);
+  SELF_CHECK (std::all_of (intresults.begin (),
+                           intresults.end (),
+                           [] (const std::unique_ptr<int> &entry)
+                             {
+                               return entry != nullptr;
+                             }));
+
+  /* The same but using the task size parameter.  */
+  intresults.clear ();
+  any_empty_tasks = false;
+  FOR_EACH (1, 0, 1,
+            [&] (int start, int end)
+              {
+                if (start == end)
+                  any_empty_tasks = true;
+                return std::unique_ptr<int> (new int (end - start));
+              },
+            task_size_one);
+  SELF_CHECK (!any_empty_tasks);
+  SELF_CHECK (std::all_of (intresults.begin (),
+                           intresults.end (),
+                           [] (const std::unique_ptr<int> &entry)
+                             {
+                               return entry != nullptr;
+                             }));
 }
 
 #endif /* FOR_EACH */
diff --git a/gdbsupport/parallel-for.h b/gdbsupport/parallel-for.h
index b565676a0d0..de9ebb15746 100644
--- a/gdbsupport/parallel-for.h
+++ b/gdbsupport/parallel-for.h
@@ -70,6 +70,12 @@ struct par_for_accumulator
     return result;
   }
 
+  /* Resize the results to N.  */
+  void resize (size_t n)
+  {
+    m_futures.resize (n);
+  }
+
 private:
 
   /* A vector of futures coming from the tasks run in the
@@ -108,6 +114,12 @@ struct par_for_accumulator
       }
   }
 
+  /* Resize the results to N.  */
+  void resize (size_t n)
+  {
+    m_futures.resize (n);
+  }
+
 private:
 
   std::vector<gdb::future<void>> m_futures;
@@ -232,6 +244,24 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
           end = j;
           remaining_size -= chunk_size;
         }
+
+      /* This case means we don't have enough elements to really
+         distribute them.  Rather than ever submit a task that does
+         nothing, we short-circuit here.  */
+      if (first == end)
+        end = last;
+
+      if (end == last)
+        {
+          /* We're about to dispatch the last batch of elements, which
+             we normally process in the main thread.  So just truncate
+             the result list here.  This avoids submitting empty tasks
+             to the thread pool.  */
+          count = i;
+          results.resize (count);
+          break;
+        }
+
       if (parallel_for_each_debug)
         {
           debug_printf (_("Parallel for: elements on worker thread %i\t: %zu"),
-- 
2.38.1
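For experimenting with the scheme outside the GDB tree, the following
self-contained sketch follows the same idea; simple_parallel_for,
n_workers, and the chunking arithmetic are invented for illustration,
and std::async stands in for gdb::thread_pool, so this is not the
implementation in the patch above.  The point it demonstrates is the
same: never post an empty chunk to a worker, and once the remaining
elements form the final batch, shrink the result vector and process
that batch in the calling thread.

/* Simplified parallel-for sketch, not the GDB implementation.  */

#include <cstdio>
#include <future>
#include <vector>

template<typename Fn>
std::vector<int>
simple_parallel_for (int first, int last, int n_workers, Fn fn)
{
  std::vector<std::future<int>> futures (n_workers);
  int count = last - first;
  int chunk = count / n_workers;	/* May be zero.  */

  size_t used = 0;
  int start = first;
  for (int w = 0; w < n_workers; ++w)
    {
      int end = start + chunk;
      if (end > last)
        end = last;
      /* Not enough elements to distribute: let the final batch take
         whatever is left instead of posting an empty task.  */
      if (start == end)
        end = last;
      if (end == last)
        break;			/* Final batch runs in this thread.  */
      futures[used++] = std::async (std::launch::async, fn, start, end);
      start = end;
    }

  /* Keep slots only for tasks actually posted, plus the final batch,
     so no result entry is ever left empty.  */
  std::vector<int> results (used + 1);
  results.back () = fn (start, last);
  for (size_t i = 0; i < used; ++i)
    results[i] = futures[i].get ();
  return results;
}

int main ()
{
  /* Two elements, four workers: no empty task is created and every
     result slot is filled.  */
  for (int r : simple_parallel_for (0, 2, 4,
                                    [] (int s, int e) { return e - s; }))
    printf ("%d\n", r);
  return 0;
}

With two elements and four workers this prints a single "2": no worker
task is posted, the whole range is handled in the calling thread, and
the result vector is sized to match.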