public inbox for frysk@sourceware.org
* test case, exceptions, and tearDown
@ 2007-07-03 21:52 Andrew Cagney
  2007-07-03 23:05 ` Kris Van Hees
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Cagney @ 2007-07-03 21:52 UTC (permalink / raw)
  To: frysk

Just FYI,

Today's IRC included a bit of discussion about how a JUnit test should be 
written (as a style thing). I've found that the JUnit FAQ 
( http://junit.sourceforge.net/doc/faq/faq.htm ), even though it uses 
version 4 code, makes a good reference for how the authors intended 
the framework to be used.

Two answers I found useful:

> *How do I write a test that fails when an unexpected exception is 
> thrown?*
>
> Declare the exception in the |throws| clause of the test method and 
> don't catch the exception within the test method. Uncaught exceptions 
> will cause the test to fail with an error.
>
> The following is an example test that fails when the 
> |IndexOutOfBoundsException| is raised:
>
>
>     @Test
>     public void testIndexOutOfBoundsExceptionNotRaised()
>         throws IndexOutOfBoundsException {
>
>         ArrayList emptyList = new ArrayList();
>         Object o = emptyList.get(0);
>     }
>
Notice how the exception doesn't need to be explicitly caught; instead 
the exception being thrown is interpreted as a FAIL.
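For illustration, here is a stand-alone sketch of what happens to that escaped exception. There is no JUnit on the classpath here; the class name and the tiny harness in run() are mine, mimicking how a runner's dispatch distinguishes an escaped exception (ERROR) from a tripped assertion (FAIL):

```java
import java.util.ArrayList;

// Stand-in harness: not JUnit API, just an illustration of how an
// uncaught exception surfaces as an ERROR while a tripped assertion
// surfaces as a FAIL.
public class EscapedExceptionDemo {
    // The FAQ's test body: declare the exception, don't catch it.
    static void testIndexOutOfBoundsExceptionNotRaised()
            throws IndexOutOfBoundsException {
        ArrayList<Object> emptyList = new ArrayList<Object>();
        Object o = emptyList.get(0);   // throws IndexOutOfBoundsException
    }

    static String run() {
        try {
            testIndexOutOfBoundsExceptionNotRaised();
            return "PASS";
        } catch (AssertionError e) {
            return "FAIL";    // an assertion inside the test tripped
        } catch (Throwable t) {
            return "ERROR";   // an unexpected exception escaped the test
        }
    }

    public static void main(String[] args) {
        System.out.println(run());   // the escaped exception lands here
    }
}
```

Running this prints ERROR: the exception bubbles past the test body and is caught only by the harness.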

and:

> *When are tests garbage collected?*
>
> /(Submitted by: Timothy Wall and Kent Beck)/
>
> By design, the tree of Test instances is built in one pass, then the 
> tests are executed in a second pass. The test runner holds strong 
> references to all Test instances for the duration of the test 
> execution. This means that for a very long test run with many Test 
> instances, none of the tests may be garbage collected until the end of 
> the entire test run.
>
> Therefore, if you allocate external or limited resources in a test, 
> you are responsible for freeing those resources. Explicitly setting an 
> object to |null| in the |tearDown()| method, for example, allows it to 
> be garbage collected before the end of the entire test run.
>
Things involving file-descriptors would certainly fall into that 
category.  Something to keep an eye out for :-)
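In JUnit 3-style setUp()/tearDown() terms, that cleanup might look something like the sketch below. The class, the temp-file names, and the demo() driver are mine, purely for illustration (no JUnit on the classpath; demo() stands in for the runner's setUp/test/tearDown cycle):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class TempFileTest {
    FileInputStream in;   // holds an open file descriptor
    File scratch;

    protected void setUp() throws IOException {
        scratch = File.createTempFile("frysk-demo", ".tmp");
        in = new FileInputStream(scratch);
    }

    protected void tearDown() throws IOException {
        if (in != null)
            in.close();       // release the descriptor promptly...
        in = null;            // ...and drop the reference, so the object can
                              // be collected even while the runner still
                              // pins this Test instance
        if (scratch != null)
            scratch.delete();
        scratch = null;
    }

    public void testSomething() { /* assertions would go here */ }

    // Stand-in for the runner's setUp/test/tearDown cycle.
    static boolean demo() {
        TempFileTest t = new TempFileTest();
        try {
            t.setUp();
            try {
                t.testSomething();
            } finally {
                t.tearDown();   // runs even if the test throws
            }
        } catch (IOException e) {
            return false;
        }
        return t.in == null && t.scratch == null;
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "released" : "leaked");
    }
}
```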


Andrew


* Re: test case, exceptions, and tearDown
  2007-07-03 21:52 test case, exceptions, and tearDown Andrew Cagney
@ 2007-07-03 23:05 ` Kris Van Hees
  2007-07-05 15:52   ` Andrew Cagney
  0 siblings, 1 reply; 5+ messages in thread
From: Kris Van Hees @ 2007-07-03 23:05 UTC (permalink / raw)
  To: Andrew Cagney; +Cc: frysk

While I agree in principle with the things you quote below, and in
general with most of the junit FAQ (as I have for many years), I do also
believe that we shouldn't necessarily blindly follow whatever is stated
in the FAQ.  I mean that in very generic terms: the purpose of having a
testsuite is to provide us with clear, consistent, and above all useful
results.

One thing I have always had an issue with concerning JUnit is the
blending of ERROR and FAIL in JUnit 4.  I believe the distinction
matters between a failed test (assertions failed, or an expected
failure mode such as an exception occurred) and a test execution that
failed (an unexpected event caused the test to not complete as
expected, i.e. something made it impossible to make a correct
PASS/FAIL determination).  In that sense, I do not think it wise to
always take the strict rule of just letting exceptions bubble up to
the framework.  To me, the most important word in the FAQ on this
topic is "unexpected".  If there is an exception that is known to
validly occur in a failure scenario (it makes the test fail to satisfy
its assertions), then that should be reflected in the result
differently from an unexpected exception that interferes with the test
execution.
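The style being described can be sketched in plain Java (no JUnit on the classpath; the fail() helper and run() harness are mine, illustrative only): catch the exception that is a *known* failure mode and turn it into an assertion failure, so that only truly unexpected exceptions escape as ERRORs.

```java
public class KnownFailureDemo {
    // Local stand-in for JUnit's fail().
    static void fail(String msg) { throw new AssertionError(msg); }

    static void testLookup() {
        try {
            "abc".charAt(10);              // exercise the operation under test
        } catch (StringIndexOutOfBoundsException e) {
            fail("lookup raised " + e);    // known failure mode -> report FAIL
        }
        // Anything else (e.g. NullPointerException) escapes uncaught -> ERROR
    }

    // Stand-in for the runner's dispatch.
    static String run() {
        try {
            testLookup();
            return "PASS";
        } catch (AssertionError e) {
            return "FAIL";    // assertion tripped: the test itself failed
        } catch (Throwable t) {
            return "ERROR";   // something interfered with test execution
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Here the out-of-range lookup is the anticipated failure mode, so it is reported as FAIL; an exception of any other type would still surface as ERROR.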

There are obviously different schools of thought, but given that Frysk
is already using two testing frameworks that are set up to return
results falling within the POSIX categories (using the dejagnu
documentation as a reference, as discussed in past months), it would
make sense for us to use the full spectrum of result messages in a
meaningful way.

	Cheers,
	Kris
On Tue, Jul 03, 2007 at 05:52:24PM -0400, Andrew Cagney wrote:
> Just FYI,
> 
> Today's IRC included a bit of discussion about how a JUnit test should be 
> written (as a style thing). I've found that the JUnit FAQ 
> ( http://junit.sourceforge.net/doc/faq/faq.htm ), even though it uses 
> version 4 code, makes a good reference for how the authors intended 
> the framework to be used.
> 
> Two answers I found useful:
> 
> >*How do I write a test that fails when an unexpected exception is 
> >thrown?*
> >
> >Declare the exception in the |throws| clause of the test method and 
> >don't catch the exception within the test method. Uncaught exceptions 
> >will cause the test to fail with an error.
> >
> >The following is an example test that fails when the 
> >|IndexOutOfBoundsException| is raised:
> >
> >
> >    @Test
> >    public void testIndexOutOfBoundsExceptionNotRaised()
> >        throws IndexOutOfBoundsException {
> >
> >        ArrayList emptyList = new ArrayList();
> >        Object o = emptyList.get(0);
> >    }
> >
> Notice how the exception doesn't need to be explicitly caught; instead 
> the exception being thrown is interpreted as a FAIL.
> 
> and:
> 
> >*When are tests garbage collected?*
> >
> >/(Submitted by: Timothy Wall and Kent Beck)/
> >
> >By design, the tree of Test instances is built in one pass, then the 
> >tests are executed in a second pass. The test runner holds strong 
> >references to all Test instances for the duration of the test 
> >execution. This means that for a very long test run with many Test 
> >instances, none of the tests may be garbage collected until the end of 
> >the entire test run.
> >
> >Therefore, if you allocate external or limited resources in a test, 
> >you are responsible for freeing those resources. Explicitly setting an 
> >object to |null| in the |tearDown()| method, for example, allows it to 
> >be garbage collected before the end of the entire test run.
> >
> Things involving file-descriptors would certainly fall into that 
> category.  Something to keep an eye out for :-)
> 
> 
> Andrew
> 


* Re: test case, exceptions, and tearDown
  2007-07-03 23:05 ` Kris Van Hees
@ 2007-07-05 15:52   ` Andrew Cagney
  2007-07-05 16:28     ` Kris Van Hees
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Cagney @ 2007-07-05 15:52 UTC (permalink / raw)
  To: Kris Van Hees; +Cc: frysk

Kris,

You raise several interesting points; and as you note there are different 
schools of thought.  Can you point me at the JUnit thread where you 
raised this issue?  I'd be interested in reading the discussion.

JUnit gives us a common framework, and a set of conventions, that is 
proving more than sufficient for our needs.  The only real stumbling 
block we've encountered is that JUnit holds to Java's underlying 
assumption that your test can be written once and run everywhere.  
Frysk, being system dependent, doesn't have that luxury and so needs 
ways to identify tests that can't work or have problems in specific 
circumstances; for that we've kludged up a work-around that draws on 
POSIX and its definitions of UNSUPPORTED and UNRESOLVED.
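As a generic illustration of that work-around (the names and structure below are mine, not frysk's actual test-harness code): a test checks a system-dependent precondition up front and reports UNSUPPORTED rather than FAIL when the host simply can't run it.

```java
// Sketch of the "guard" pattern for system-dependent tests.
// Illustrative only; frysk's real harness differs in detail.
public class GuardDemo {
    static String runGuarded(boolean kernelHasFeature) {
        if (!kernelHasFeature)
            return "UNSUPPORTED";   // the host can't run this test;
                                    // that is not a code bug, so don't FAIL
        // ... the real assertions would run here ...
        return "PASS";
    }

    public static void main(String[] args) {
        System.out.println(runGuarded(false));  // host lacks the feature
        System.out.println(runGuarded(true));   // host supports it
    }
}
```

An UNRESOLVED result would be reported analogously when some unexpected event made a PASS/FAIL determination impossible.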

Given the success we're seeing with developers adding JUnit tests, I 
consider this more than sufficient.

Andrew


Kris Van Hees wrote:
> While I agree in principle with the things you quote below, and in
> general with most of the junit FAQ (as I have for many years), I do also
> believe that we shouldn't necessarily blindly follow whatever is stated
> in the FAQ.  I mean that in very generic terms: the purpose of having a
> testsuite is to provide us with clear, consistent, and above all useful
> results.
>
> One thing I have always had an issue with concerning JUnit is the
> blending of ERROR and FAIL in JUnit 4.  I believe the distinction
> matters between a failed test (assertions failed, or an expected
> failure mode such as an exception occurred) and a test execution that
> failed (an unexpected event caused the test to not complete as
> expected, i.e. something made it impossible to make a correct
> PASS/FAIL determination).  In that sense, I do not think it wise to
> always take the strict rule of just letting exceptions bubble up to
> the framework.  To me, the most important word in the FAQ on this
> topic is "unexpected".  If there is an exception that is known to
> validly occur in a failure scenario (it makes the test fail to satisfy
> its assertions), then that should be reflected in the result
> differently from an unexpected exception that interferes with the test
> execution.
>
> There are obviously different schools of thought, but given that Frysk
> is already using two testing frameworks that are set up to return
> results falling within the POSIX categories (using the dejagnu
> documentation as a reference, as discussed in past months), it would
> make sense for us to use the full spectrum of result messages in a
> meaningful way.
>   


* Re: test case, exceptions, and tearDown
  2007-07-05 15:52   ` Andrew Cagney
@ 2007-07-05 16:28     ` Kris Van Hees
  2007-07-05 18:04       ` Andrew Cagney
  0 siblings, 1 reply; 5+ messages in thread
From: Kris Van Hees @ 2007-07-05 16:28 UTC (permalink / raw)
  To: Andrew Cagney; +Cc: Kris Van Hees, frysk

I have not raised these issues in any JUnit thread, nor do I believe
there is much point in doing so.  The JUnit developers have stated
their opinion rather strongly on these issues, and I honestly do not
believe they are open to changing their minds.  I have had numerous
conversations within organizations about this topic, and the feelings
on both sides of the issue seem to be rather strong.

Sorry that there has not been a more public discussion about this, as
far as I am aware.

If a lack of distinction between a test failing due to external
influences (or being non-deterministic) and a recognized exception
triggering a failure is acceptable, the suggested use of JUnit is of
course sufficient.  Given our situation (needing to deal with more
system-specific details), and, as you say, not being able to depend on
a write-once-run-anywhere situation, I don't think we can simply stick
to the overall recommendations of the JUnit team.  After all, we are
*not* working within their ideal environment.

Anyway, given that there are clearly two positions on this topic,
majority opinion ought to drive the final decision.  If we do go with
a folding of ERROR and FAIL, though, I want to suggest that failing
tests be given extra attention, since such a failure may reflect a
genuine problem with the test itself rather than the implemented
assertions not passing.

	Cheers,
	Kris

On Thu, Jul 05, 2007 at 11:51:49AM -0400, Andrew Cagney wrote:
> Kris,
> 
> You raise several interesting points; and as you note there are different 
> schools of thought.  Can you point me at the JUnit thread where you 
> raised this issue?  I'd be interested in reading the discussion.
> 
> JUnit gives us a common framework, and a set of conventions, that is 
> proving more than sufficient for our needs.  The only real stumbling 
> block we've encountered is that JUnit holds to Java's underlying 
> assumption that your test can be written once and run everywhere.  
> Frysk, being system dependent, doesn't have that luxury and so needs 
> ways to identify tests that can't work or have problems in specific 
> circumstances; for that we've kludged up a work-around that draws on 
> POSIX and its definitions of UNSUPPORTED and UNRESOLVED.
> 
> Given the success we're seeing with developers adding JUnit tests, I 
> consider this more than sufficient.
> 
> Andrew
> 
> 
> Kris Van Hees wrote:
> >While I agree in principle with the things you quote below, and in
> >general with most of the junit FAQ (as I have for many years), I do also
> >believe that we shouldn't necessarily blindly follow whatever is stated
> >in the FAQ.  I mean that in very generic terms: the purpose of having a
> >testsuite is to provide us with clear, consistent, and above all useful
> >results.
> >
> >One thing I have always had an issue with concerning JUnit is the
> >blending of ERROR and FAIL in JUnit 4.  I believe the distinction
> >matters between a failed test (assertions failed, or an expected
> >failure mode such as an exception occurred) and a test execution that
> >failed (an unexpected event caused the test to not complete as
> >expected, i.e. something made it impossible to make a correct
> >PASS/FAIL determination).  In that sense, I do not think it wise to
> >always take the strict rule of just letting exceptions bubble up to
> >the framework.  To me, the most important word in the FAQ on this
> >topic is "unexpected".  If there is an exception that is known to
> >validly occur in a failure scenario (it makes the test fail to satisfy
> >its assertions), then that should be reflected in the result
> >differently from an unexpected exception that interferes with the test
> >execution.
> >
> >There are obviously different schools of thought, but given that Frysk
> >is already using two testing frameworks that are set up to return
> >results falling within the POSIX categories (using the dejagnu
> >documentation as a reference, as discussed in past months), it would
> >make sense for us to use the full spectrum of result messages in a
> >meaningful way.
> >  
> 


* Re: test case, exceptions, and tearDown
  2007-07-05 16:28     ` Kris Van Hees
@ 2007-07-05 18:04       ` Andrew Cagney
  0 siblings, 0 replies; 5+ messages in thread
From: Andrew Cagney @ 2007-07-05 18:04 UTC (permalink / raw)
  To: Kris Van Hees; +Cc: frysk

Kris Van Hees wrote:
> Anyway, given that there are clearly two positions on this topic,
> majority opinion ought to drive the final decision.  If we do go with
> a folding of ERROR and FAIL, though, I want to suggest that failing
> tests be given extra attention, since such a failure may reflect a
> genuine problem with the test itself rather than the implemented
> assertions not passing.
>   
Too true; for my part I've already closed out two Fedora 7 failures.

Andrew


