Date: Wed, 30 May 2007 12:07:00 -0000
From: Kris Van Hees
To: Andrew Cagney
Cc: frysk@sourceware.org
Subject: Re: Automated build-and-test summary report (2007/05/23)

Pong. As you are probably aware, yesterday was a US holiday.

I must not be getting your point... Why would you support the notion that
automated builds and testing are a benefit (at least, that is my impression
based on conversations on #frysk and on conference calls), while at the
same time pressuring me into not posting the results? I have yet to find
any testing methodology that promotes executing tests without presenting
the results for investigation. Nor have I found any (yet) that promotes
executing tests and then verifying the results much later (like running
tests all week, but only getting the results at the end of that week). And
note that these are all *project* test results.

When we first started getting preliminary results from the build-and-test
system, various people stepped in to deal with some issues that were
obvious (to the people working on them) once the summarized report was
posted. That turned out to be rather low-hanging fruit that simply had not
been dealt with.

The build system has also been exercising (on your suggestion, by the way)
the state of building from a 'make dist' tree. We started doing that on
Mar 29th, and to date there has not been a single successful build using
that configuration, nor does it seem like anyone has even bothered to look
into the problem.
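For concreteness, the check in question amounts to roughly the following
(a minimal sketch only, assuming an autotools-style tree; the tarball
naming and the plain configure/make steps are assumptions on my part, not
the actual build scripts):

    #!/usr/bin/env python
    # Sketch of a 'make dist' round-trip check: build the distribution
    # tarball, unpack it in a scratch directory, and attempt a clean
    # configure/make inside the unpacked tree.
    import glob
    import subprocess
    import sys
    import tarfile
    import tempfile

    def run(cmd, cwd):
        # True when the command exits with status 0.
        return subprocess.call(cmd, cwd=cwd) == 0

    def check_dist_build(srcdir):
        if not run(["make", "dist"], srcdir):
            return False
        tarballs = glob.glob(srcdir + "/*.tar.gz")
        if not tarballs:
            return False
        workdir = tempfile.mkdtemp()
        tar = tarfile.open(tarballs[0])
        tar.extractall(workdir)
        topdir = workdir + "/" + tar.getnames()[0].split("/")[0]
        tar.close()
        # The build must succeed from the pristine dist tree alone.
        return run(["./configure"], topdir) and run(["make"], topdir)

    if __name__ == "__main__":
        sys.exit(0 if check_dist_build(sys.argv[1]) else 1)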
While I fully agree that testing that configuration is important, I am a
bit concerned that you were the one who strongly suggested we add it, yet
you have not said a word about it even though it has been failing for two
months straight, with no indication that it is about to improve.

Throughout the months the system was in development, and then once it
became fully functional, we went through quite a few iterations of finding
kernel problems relating to utrace. More often than not, these were
problems that others had not reported (whether due to not testing on those
configurations or otherwise). We got quite a bit of traction on that, and
largely thanks to Roland's work the situation improved a whole lot.

It is also very noticeable in the today-vs-yesterday comparison reports
that I have recently been posting on a daily basis (and that you clearly
have a personal objection to) that quite a few tests show intermittent
failures in specific configurations, some almost on an every-other-day
cycle; a minimal sketch of that kind of comparison is included at the end
of this message. You have previously stated that no test should ever PASS
part of the time and FAIL (or be an ERROR) the rest of the time - that
would constitute a "bad test". I fully agree. And that is exactly the kind
of information these reports are bringing to the surface.

These situations are one of the reasons not to depend solely on developers
running tests and fixing whatever is broken. Take the current issue with
pending USR1 signals: for you (as a developer) the problem occurred, you
did a clean rebuild, and it went away. The build system exercises clean
builds all the time, and it clearly shows that testing as a developer is
simply not sufficient to determine whether a problem exists.

So, please enlighten me: what is the *real* problem here?

	Cheers,
	Kris

On Tue, May 29, 2007 at 01:23:35PM -0400, Andrew Cagney wrote:
> Kris, ping.
>
> Andrew Cagney wrote:
> >Kris,
> >
> >Again, I would really appreciate it if we didn't fill the list with
> >your test-results.  If the system must always post out its results
> >then it unfortunately sounds like the best solution is to limit them
> >to once a week.  Sigh.
> >
> >Andrew
> >
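P.S. Here is the kind of today-vs-yesterday comparison I mean, as a
minimal sketch; it assumes each result file lists one "TESTNAME STATUS"
pair per line (PASS, FAIL, ERROR, ...), which is an assumption of mine
rather than the actual report format:

    #!/usr/bin/env python
    # Sketch: report tests whose status flipped between two runs.
    import sys

    def load(path):
        # Map test name -> status from a "TESTNAME STATUS" per-line file.
        results = {}
        for line in open(path):
            parts = line.split()
            if len(parts) == 2:
                results[parts[0]] = parts[1]
        return results

    def compare(yesterday, today):
        old, new = load(yesterday), load(today)
        for name in sorted(set(old) & set(new)):
            if old[name] != new[name]:
                # A PASS <-> FAIL flip is a candidate intermittent test.
                print("%s: %s -> %s" % (name, old[name], new[name]))

    if __name__ == "__main__":
        compare(sys.argv[1], sys.argv[2])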