Hi Andrew,

On Tue, 2007-07-31 at 11:57 -0400, Andrew Cagney wrote:
> Mark Wielaard wrote:
> > On Mon, 2007-07-30 at 11:47 -0400, Andrew Cagney wrote:
> >
> >> I'm looking at ways to more directly test the frysk.testbed.Funit*
> >> classes (e.g., FunitExec, DetachedAckProcess) that wrap the
> >> PKGLIBDIR/funit-* utilities, but am finding that the most effective
> >> route is to use frysk.proc's framework - duplicating the existing
> >> frysk.proc tests that exercise frysk.proc functionality and that
> >> effectively test that code already.
> >>
> >> I could duplicate the tests, but it seems redundant. Any thoughts on
> >> a strategy?
> >
> > I might be missing the exact cases you want to test. But can't you
> > just audit the current frysk.proc tests to see if they already cover
> > all relevant cases and, if not, add one or two tests to the existing
> > proc tests so all cases are covered? That way you will also extend
> > the real proc tests to handle more cases, catching two birds with one
> > stone (if that isn't a terribly politically incorrect saying).
>
> You describe the current state of play; the frysk.proc code is testing
> both itself internally and the funit tools implicitly. There's nothing
> directly testing units such as FunitExecOffspring; instead it is done
> implicitly via frysk.proc. That is great when it works, but not so
> great when tests fail, since differentiating whether a
> FunitExecOffspring or a frysk.proc breakage caused the failure isn't
> possible.

Tests that don't just cover the bare essentials, but exercise the code in
actual use scenarios, are very important. They make sure the code is
tested as it will actually be used. And in this case, if you find
something not covered, the proc code also gets more tests. As you say,
that is great when it works. But I get the feeling that is not enough for
your current strategy.

> As a contrasting example: say we find the UI is crashing and track it
> down to a dwarf binding bug. What we do is add a test case to the dwarf
> bindings exercising the problem (which contains the root cause), and
> then fix it. As the core code is now being tested, we're confident that
> our problem won't return. Is there an effective way to do that here
> with the Funit* bindings?

Right, that is what I am actually suggesting. Make sure that there are
enough tests in proc that you feel confident everything under funit is
covered. Then, if some issue is found later on anyway and is tracked down
to funit, just add an extra test there when you fix the problem, so you
can be confident it won't return.

Beyond that, you will have to fall back on your suggested, slightly
redundant, test duplication, I am afraid.

Cheers,

Mark
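
P.S. To make the "add a test where the root cause lives" idea concrete, a
direct test of one of the test-bed classes could look something like the
sketch below. This is only a sketch: the no-argument constructor and the
requestExec()/assertAck() helpers are my guesses at what such a class
might expose, the real frysk.testbed.FunitExecOffspring API will differ.
The point is just that a test like this exercises the funit-* utility
directly, so a failure here cannot be blamed on frysk.proc.

    import junit.framework.TestCase;
    import frysk.testbed.FunitExecOffspring;

    public class TestFunitExecOffspring extends TestCase {
        public void testExecIsAcknowledged() {
            // Hypothetical constructor; the real class may need
            // arguments describing the program to exec.
            FunitExecOffspring offspring = new FunitExecOffspring();
            // Hypothetical helpers: request the exec, then block until
            // the funit-exec utility sends back its acknowledgement,
            // failing the test on a timeout. A failure here points at
            // the test-bed code, not at frysk.proc.
            offspring.requestExec();
            offspring.assertAck();
        }
    }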