From: Markus Gälli
To: Sascha Brawer
Cc: Stephen Crawley
Date: Thu, 11 Mar 2004 10:51:00 -0000
Subject: Re: [Q] Number of unit tests in Mauve and of assertions in Classpath?
Message-Id: <18C7F102-734A-11D8-B196-000A958C4F3C@iam.unibe.ch>
In-Reply-To: <20040311093422.3949@smtp.mail.ch.easynet.net>

Hi Sascha,

>>> roughly 145 [Mauve tests] are currently failing.
>>
>> I guess these 145 are not independent, meaning, if you fixed one, many
>> others would be fixed too. Right?
>
> In some cases, there exist multiple tests for the same functionality.
> For example the tests for solving quadratic and cubic equations, where
> the testee is called with a bunch of somewhat random test equations.
> You'll have the same situation with any tests that feed random input
> data into a function and check whether the result is as expected. Any
> failures would be sort of mutually dependent: as soon as you fix any of
> these failures, the other failures are likely to disappear immediately
> (because it's usually the same code that is failing). Stated in more
> abstract terms, the "fixing order" is only a partial, not a total,
> ordering on tests.

Right. In my experience, many tests also sit on the same level; there you
basically don't care which one you start with, as any exception will do.
I think you can get more out of tests whose coverage sets form an
inclusion hierarchy, meaning that one test takes a more abstract view
than the other, but both run the same inner parts.

>> I am just writing a paper stating that it makes sense to sort failing
>> tests by size of covered methods, beginning with the smallest.
>
> Interesting... Why would this be an improvement -- because you assume
> that smaller methods get invoked by larger ones, and the smaller
> methods would be the specific point of failure? In that case, couldn't
> you just sort the failed tests based on a (post-order) traversal of the
> observed call graph? Or, this may be easier to obtain, based on the
> total number of instructions/bytecodes executed by the testee and its
> callees?

That was what I meant.
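To make sure we mean the same thing, here is the ordering I have in mind
as a rough Java sketch. The class and method names below are invented
just for this mail; the paper does not contain this code, and the
coverage map is assumed to have been gathered elsewhere:

  import java.util.*;

  // Sketch: order failing tests so that the test covering the smallest
  // set of method signatures comes first.
  public class CoverageOrdering
  {
    /** coverageSets maps a test name to the Set of method signatures it covers. */
    public static List sortBySmallestCoverageSet(final Map coverageSets)
    {
      List tests = new ArrayList(coverageSets.keySet());
      Collections.sort(tests, new Comparator()
      {
        public int compare(Object a, Object b)
        {
          int sizeA = ((Set) coverageSets.get(a)).size();
          int sizeB = ((Set) coverageSets.get(b)).size();
          // A smaller coverage set means a more specific test, so it sorts first.
          return sizeA - sizeB;
        }
      });
      return tests;
    }
  }

With your call-graph or bytecode-count criterion, only the compare method
above would change.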
I hope I am not that sloppy in my paper... ;-) It should have been: "The
size of the sets of covered method signatures".

> Well, I guess the reason for sorting by method size will be stated in
> your paper. Would you mind posting a link to your paper on this list
> when you've published it? Thanks in advance!

You can read an old version here:
http://www.iam.unibe.ch/~scg/Archive/Papers/Gael03bPartialOrderingTestsByCoverageSets.pdf
Will hopefully publish the final version Real Soon Now (TM).

>> My motivation for asking here was that if you had assertions (which I
>> surely understand that you don't have), more of the failing tests
>> would fail at the most specific assertion, making my sorting a bit
>> more useless.
>
> This seems like a reasonable assumption, but I wouldn't know how to
> falsify it.

>> But hey, you just gave me a wonderful argument: in some big and
>> important projects, assertions are not even an option!
>
> Well, I guess we will eventually use assertions in Classpath. There's
> no law saying we must not use 1.4 language features. The reason why we
> haven't been using assertions in the past is the lack of support by
> free Java compilers. Which would be easy enough to fix (a
> quick-and-dirty fix would be just to ignore the assert statement).

Difficult to introduce that to Object? It could be made painless (in
Smalltalk, at least). See:
http://www.smalltalkchronicles.net/edition2-1/st_compiler.htm

(snip)

> Mauve simply takes a large text file that lists the names of test
> classes, loads the respective class (which must implement the
> gnu.testlet.Testlet interface) and invokes its "test" method. So the
> "sorting" is just the ordering in the text file. If you sorted this
> list in any particular way, the output (= failing tests) would be
> sorted in the same order.

>> - Do you know of any (equally responsive :-) Java open-source
>> community with some big project that uses both JDK 1.4 (and thus,
>> theoretically, assertions) and unit tests?
>
> I guess most JUnit users would fall into that category, but I
> unfortunately don't know any specific examples.

Thanks a lot.

Cheers,
Markus
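P.S. Just to check that I understood the Mauve setup you describe, I
picture the harness loop roughly like the sketch below. This is only my
reading of your description, not the real Mauve code: the real
gnu.testlet.Testlet.test() takes a TestHarness argument, which I drop
here, and the driver class and its file handling are invented.

  import java.io.*;

  // Simplified stand-in for gnu.testlet.Testlet.
  interface Testlet
  {
    void test();
  }

  // Invented driver: read test class names from a text file, load each
  // class, run its test method, and report failures in file order.
  public class RunTests
  {
    public static void main(String[] args) throws IOException
    {
      BufferedReader in = new BufferedReader(new FileReader(args[0]));
      String name;
      while ((name = in.readLine()) != null)
      {
        name = name.trim();
        if (name.length() == 0)
          continue;
        try
        {
          Testlet testlet = (Testlet) Class.forName(name).newInstance();
          testlet.test();
        }
        catch (Throwable t)
        {
          // The "sorting" of failures is simply the order of the file.
          System.err.println("FAIL: " + name + ": " + t);
        }
      }
      in.close();
    }
  }

So sorting the text file by coverage-set size should indeed be enough to
get the failing tests reported in my preferred order.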