From: Keith Seitz
To: Serhei Makarov
Cc: bunsen@sourceware.org
Subject: Re: Initial findings of bunsen performance and questions
Date: Fri, 18 Sep 2020 09:16:25 -0700
Message-ID: <9709c97e-bb12-48dd-2c5c-a9efb35e55d1@redhat.com>
References: <30950cc2-5d7f-eb93-42b1-d1c7a9138e81@redhat.com>
On 9/16/20 4:09 PM, Serhei Makarov wrote:
> On Wed, Sep 16, 2020, at 6:18 PM, Keith Seitz via Bunsen wrote:
>> The first question to ask is: Is this an oversimplified/naive implementation
>> of this script?
> It looks right to me. I briefly suspected the lambda might do some additional
> object copying, but that doesn't seem to be the case.

I profiled the code at one point to determine where all the time was
being spent. I don't have the specific results handy, but IIRC, most of
the time was spent in Testrun.__init__, specifically:

    for field in defer_fields:
        self[field] = self._deserialize_testrun_field(field, self[field],
                                                      cursor_commit_ids)

> I suspect my code for building the repo has some bug when using consolidate_pass=False.
> Could you place the Git/JSON repo you built somewhere I have access to?

Yes, I will send you (privately) what I've been benchmarking. This is a
nearly unaltered bunsen repo, but I've added a few simple patches to fix
some existing problems.

> (Also, you could try the +diff_runs script on your repo. If the JSON
> parsing is the source of the slowdown and reading one run took 20s,
> logically reading two runs would take you 40s.)

That is exactly what happens:

$ time ./bunsen.py +diff_runs 42563db d2a72bd
...
real    0m57.300s
user    0m49.599s
sys     0m17.683s

> IMO the comparison has to be done with 100s to 1000s of similar test runs
> since Git's de-duplication must be compared to whatever SQLite does,
> at that scale of data.
> I doubt it's important though, for this use case we have disk space to burn
> and the query speedup even justifies keeping both forms of storage.

Indeed -- the sample space is too small, but I just wanted to get a
feeling for how this is going.
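As an aside, if you want to reproduce that kind of measurement yourself,
the standard-library cProfile/pstats modules are enough. A minimal
sketch (the load_testrun body below is a stand-in workload, not the
actual bunsen call):

```python
# Sketch: profile a testrun load and print the hottest entries by
# cumulative time. load_testrun() is a placeholder; in a real run it
# would be something like bunsen.Bunsen().testrun('42563db').
import cProfile
import io
import pstats

def load_testrun():
    # Stand-in workload; replace with the actual bunsen call.
    return sum(i * i for i in range(10**6))

profiler = cProfile.Profile()
profiler.enable()
load_testrun()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)  # top 10 entries by cumulative time
print(stream.getvalue())
```

That is roughly how I confirmed the time was going into field
deserialization rather than, say, the Git plumbing.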
>> Is this an approach that seems viable as a supplement (replacement?)
>> to JSON output? Is this approach something worth pursuing?
> Definitely worth pursuing due to the aforementioned possibility of 'column queries'
> which I don't see any way of handling well with the design I currently have.
>
> I'm not sure if SQLite is better used as a replacement for the JSON/Git storage
> or as a supplemental cache built from it and used to speed up queries.
> (Also, the original log files must be retained in any case.)

All my current proof-of-concept does is replace the *test* data,
.JSON -> .db. There are still JSON files that describe the testrun
metadata; I haven't attempted to change that since I haven't fully
investigated it.

Right now, my plan is to make this an optional, configurable feature.
It really only needs to be configured when data is imported; we should
be able to handle everything else transparently behind the scenes.

So I will just continue on my way and try to get something review-ready.

Thank you for your input!

Keith
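P.S. For concreteness, here is a rough sketch of the kind of SQLite
layout that makes those 'column queries' cheap. The table and column
names are illustrative only, not the actual schema in my
proof-of-concept:

```python
# Sketch: per-testcase rows keyed by testrun commit, so that a
# cross-run query ("all outcomes for one .exp across runs") becomes
# plain SQL instead of parsing every testrun's JSON. Names here are
# assumptions for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real repo would use an on-disk .db
conn.execute("""
    CREATE TABLE testcase (
        testrun_commit TEXT NOT NULL,   -- commit id of the testrun
        name           TEXT NOT NULL,   -- e.g. gdb.base/break.exp
        subtest        TEXT,
        outcome        TEXT NOT NULL    -- PASS, FAIL, KFAIL, ...
    )
""")
conn.executemany(
    "INSERT INTO testcase VALUES (?, ?, ?, ?)",
    [("42563db", "gdb.base/break.exp", "break main", "PASS"),
     ("d2a72bd", "gdb.base/break.exp", "break main", "FAIL")])

# A column query across runs, with no JSON parsing involved:
rows = conn.execute(
    "SELECT testrun_commit, outcome FROM testcase "
    "WHERE name = ? ORDER BY testrun_commit",
    ("gdb.base/break.exp",)).fetchall()
print(rows)
```

An index on (name, testrun_commit) would be the obvious next step once
the row counts get into the 100s-of-runs range you mentioned.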