From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 14 Jan 2023 10:03:52 +0400
From: Joel Brobecker
To: Tom Tromey via Gdb-patches
Cc: Tom Tromey, Joel Brobecker
Subject: Re: [PATCH v2 1/4] Avoid submitting empty tasks in parallel_for_each
References: <20230110183338.2623088-1-tromey@adacore.com>
 <20230110183338.2623088-2-tromey@adacore.com>
In-Reply-To: <20230110183338.2623088-2-tromey@adacore.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Tom,

On Tue, Jan 10, 2023 at 11:33:35AM -0700, Tom Tromey via Gdb-patches wrote:
> I found that parallel_for_each would submit empty tasks to the thread
> pool.  For example, this can happen if the number of tasks is smaller
> than the number of available threads.  In the DWARF reader, this
> resulted in the cooked index containing empty sub-indices.  This patch
> arranges to instead shrink the result vector and process the trailing
> entries in the calling thread.
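
To make the failure mode concrete, here is a small standalone sketch of
the kind of naive work splitting being described (illustrative only: this
is not gdb's actual parallel_for_each code, and naive_split and its
parameters are invented for the example):

/* Illustrative sketch only -- not gdb's parallel_for_each.  */

#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

/* Naively split N_ELEMENTS into one [start, end) range per thread.
   When there are fewer elements than threads, some ranges come out
   empty, which is exactly the situation the patch avoids.  */

static std::vector<std::pair<int, int>>
naive_split (int n_elements, int n_threads)
{
  std::vector<std::pair<int, int>> ranges;
  int per_thread = std::max (n_elements / n_threads, 1);
  int start = 0;

  for (int i = 0; i < n_threads; ++i)
    {
      int end = (i == n_threads - 1
                 ? n_elements
                 : std::min (start + per_thread, n_elements));
      ranges.emplace_back (start, end);  /* Empty when start == end.  */
      start = end;
    }
  return ranges;
}

int
main ()
{
  /* 1 element, 4 threads: three of the four ranges are empty.  */
  for (const auto &r : naive_split (1, 4))
    std::printf ("[%d, %d)%s\n", r.first, r.second,
                 r.first == r.second ? "  <-- empty task" : "");
  return 0;
}

With 1 element and 4 threads, the naive split above produces three empty
ranges; the patch below instead truncates the result vector and handles
the trailing batch in the calling thread, so no empty task reaches the
thread pool.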

Thanks for the updated patch, and the added test. This patch looks OK
to me.

> ---
>  gdb/unittests/parallel-for-selftests.c | 39 ++++++++++++++++++++++++++
>  gdbsupport/parallel-for.h              | 30 ++++++++++++++++++++
>  2 files changed, 69 insertions(+)
>
> diff --git a/gdb/unittests/parallel-for-selftests.c b/gdb/unittests/parallel-for-selftests.c
> index 3162db18df1..15a095ae62b 100644
> --- a/gdb/unittests/parallel-for-selftests.c
> +++ b/gdb/unittests/parallel-for-selftests.c
> @@ -149,6 +149,45 @@ TEST (int n_threads)
>    SELF_CHECK (counter == NUMBER);
>
>  #undef NUMBER
> +
> +  /* Check that if there are fewer tasks than threads, then we won't
> +     end up with a null result.  */
> +  std::vector<std::unique_ptr<int>> intresults;
> +  std::atomic<bool> any_empty_tasks (false);
> +
> +  FOR_EACH (1, 0, 1,
> +	    [&] (int start, int end)
> +	      {
> +		if (start == end)
> +		  any_empty_tasks = true;
> +		return std::unique_ptr<int> (new int (end - start));
> +	      });
> +  SELF_CHECK (!any_empty_tasks);
> +  SELF_CHECK (std::all_of (intresults.begin (),
> +			   intresults.end (),
> +			   [] (const std::unique_ptr<int> &entry)
> +			     {
> +			       return entry != nullptr;
> +			     }));
> +
> +  /* The same but using the task size parameter.  */
> +  intresults.clear ();
> +  any_empty_tasks = false;
> +  FOR_EACH (1, 0, 1,
> +	    [&] (int start, int end)
> +	      {
> +		if (start == end)
> +		  any_empty_tasks = true;
> +		return std::unique_ptr<int> (new int (end - start));
> +	      },
> +	    task_size_one);
> +  SELF_CHECK (!any_empty_tasks);
> +  SELF_CHECK (std::all_of (intresults.begin (),
> +			   intresults.end (),
> +			   [] (const std::unique_ptr<int> &entry)
> +			     {
> +			       return entry != nullptr;
> +			     }));
>  }
>
>  #endif /* FOR_EACH */
> diff --git a/gdbsupport/parallel-for.h b/gdbsupport/parallel-for.h
> index b565676a0d0..de9ebb15746 100644
> --- a/gdbsupport/parallel-for.h
> +++ b/gdbsupport/parallel-for.h
> @@ -70,6 +70,12 @@ struct par_for_accumulator
>        return result;
>      }
>
> +  /* Resize the results to N.  */
> +  void resize (size_t n)
> +  {
> +    m_futures.resize (n);
> +  }
> +
>  private:
>
>    /* A vector of futures coming from the tasks run in the
> @@ -108,6 +114,12 @@ struct par_for_accumulator
>        }
>    }
>
> +  /* Resize the results to N.  */
> +  void resize (size_t n)
> +  {
> +    m_futures.resize (n);
> +  }
> +
>  private:
>
>    std::vector<gdb::future<void>> m_futures;
> @@ -232,6 +244,24 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
> 	  end = j;
> 	  remaining_size -= chunk_size;
> 	}
> +
> +      /* This case means we don't have enough elements to really
> +	 distribute them.  Rather than ever submit a task that does
> +	 nothing, we short-circuit here.  */
> +      if (first == end)
> +	end = last;
> +
> +      if (end == last)
> +	{
> +	  /* We're about to dispatch the last batch of elements, which
> +	     we normally process in the main thread.  So just truncate
> +	     the result list here.  This avoids submitting empty tasks
> +	     to the thread pool.  */
> +	  count = i;
> +	  results.resize (count);
> +	  break;
> +	}
> +
>        if (parallel_for_each_debug)
> 	{
> 	  debug_printf (_("Parallel for: elements on worker thread %i\t: %zu"),
> --
> 2.38.1
>

--
Joel