From: Yao Qi
To: Doug Evans
CC: gdb-patches
Subject: Re: [RFC] Monster testcase generator for performance testsuite
Date: Thu, 08 Jan 2015 01:55:00 -0000
Message-ID: <87k30yt4rp.fsf@codesourcery.com>
In-Reply-To: (Doug Evans's message of "Wed, 7 Jan 2015 14:33:18 -0800")
References: <87mw5xuzdc.fsf@codesourcery.com> <871tn7udyt.fsf@codesourcery.com>

Doug Evans writes:

> The point of the 1 second vs 10 second scenario is that the community
> may find that 1 second is acceptable (IOW *not* a performance problem
> significant enough to address).  It'll depend on the situation.
> But at scale the performance may be untenable, causing one to want
> to rethink one's algorithm or data structure or whatever.

Right, the algorithm may need to be reconsidered when the program
scales up.

> Similar issues arise elsewhere btw.
> E.g., gdb may handle 10 or 100 threads ok, but how about 1000 threads?

Then I have to run the program with 1000 threads.

>>> Similarly, if a change to gdb increases memory usage by 40MB is that ok?
>>> Maybe.  And if my users see that increase become 400MB is that still ok?
>>> Possibly (depending on the nature of the change).  But, again, one of my
>>> goals here is to have in place mechanisms to find out sooner than later.
>>
>> Similarly, if a 40MB memory usage increase is sufficient to show the
>> performance problem, why do we still have to use a bigger one?
>>
>> A perf test case is used to demonstrate the real performance problems
>> in some super large programs, but that doesn't mean the perf test case
>> has to be as big as those programs.

> One may think 40MB is a reasonable price to pay for some change
> or some new feature.  But at scale that price may become unbearable.
> So, yes, we do need perf testcases that let one exercise gdb at scale.

Hmmm, that makes sense to me.

>>>>> These tests currently require separate build-perf and check-perf steps,
>>>>> which is different from normal perf tests.  However, due to the time
>>>>> it takes to build the program I've added support for building the pieces
>>>>> of the test in parallel, and hooking this parallel build support into
>>>>> the existing framework required some pragmatic compromise.
>>>>
>>>> ... so the parallel build part may not be needed.
>>>
>>> I'm not sure what the hangup is on supporting parallel builds here.
>>> Can you elaborate?  It's really not that much code, and while I could
>>
>> I'd like to keep the gdb perf tests simple.

> How simple?  What about parallel builds adds too much complexity?
> make check-parallel adds complexity, but I'm guessing no one is
> advocating removing it, or was advocating against checking it in.

Well, 'make check-parallel' is useful, and parallel build in the perf
test case generator is useful too.  However, my initial feeling is that
parallel build in the generator is a plus, not a must.  I thought we
could have a perf test case generator without parallel build.

>>>> It looks like a monster rather than a perf test case :)
>>>
>>> Depends.  How long do your users still wait for gdb to do something?
>>> My users are still waiting too long for several things (e.g., startup time).
>>> And I want to be able to measure what my users see.
>>> And I want to be able to provide upstream with demonstrations of that.
>>
>> IMO, your expectation is beyond the scope or purpose of a perf test
>> case.  The purpose of each perf test case is to make sure there is no
>> performance regression and to expose performance problems as the code
>> evolves.

> It's precisely within the scope and purpose of the perf testsuite!
> We need to measure how well gdb will work on real programs,
> and make sure changes introduced don't adversely affect such programs.
> How do you know a feature/change/improvement will work at scale unless
> you test it at scale?

We should test it at scale.

>> Each perf test case is to measure the
>> performance of gdb on a certain path, so it doesn't have to behave
>> exactly the same as the application users are debugging.
>>
>>>> It is good to
>>>> have a small version enabled by default, which requires less than 1 G,
>>>> for example, to run it under GDB.  How much time does it take to compile
>>>> (sequential build) and run the small version?
>>>
>>> There are mechanisms in place to control the amount of parallelism.
>>> One could make it part of the test spec, but I'm not sure it'd be useful
>>> enough.  Thus I think there's no need to compile small testcases
>>> serially.
>>
>> Is it possible (or necessary) that we divide it into two parts: 1) the
>> perf test case generator and 2) parallel build?  As we increase the
>> size of generated perf test cases, the long compilation time can
>> justify having parallel build.

> I'm not sure what you're advocating for here.
> Can you rephrase/elaborate?

Can we have a perf test case generator without using parallel build,
and then add building perf test cases in parallel as the next step?
I'd like to add new things gradually.  If you think it isn't necessary
to do things in these two steps, I am OK too.  I don't have a strong
opinion on this now.

I'll take a look at your patch in detail.

-- 
Yao (齐尧)
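[The generator-versus-parallel-build split discussed in this thread can be sketched roughly as follows.  This is a minimal illustration only; the names `gen_piece` and `build_pieces` are made up and are not the API of Doug's actual patch.]

```python
# Sketch: generate the pieces of a monster testcase concurrently.
# A real harness would also invoke the compiler on each emitted piece.
import concurrent.futures

def gen_piece(index, n_functions=100):
    """Emit one compilation unit containing many trivial functions."""
    lines = [f"int piece{index}_fn{i} (void) {{ return {i}; }}"
             for i in range(n_functions)]
    return "\n".join(lines) + "\n"

def build_pieces(n_pieces, jobs=4):
    """Produce n_pieces sources, jobs at a time.

    Results are collected in submission order, so output is
    deterministic even though the work runs concurrently."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=jobs) as pool:
        futures = [pool.submit(gen_piece, i) for i in range(n_pieces)]
        return [f.result() for f in futures]

pieces = build_pieces(8)
print(len(pieces), pieces[0].splitlines()[0])
```

With this shape, "serial versus parallel" reduces to the `jobs` knob, which is one way the generator could stay simple while still letting a large testcase build scale.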