Tuesday, March 24, 2009

unittest - now with test skipping (finally)

Yesterday, I was happy to commit a patch that adds test skipping and expected-failure support to the venerable unittest module. It adds a skip() method to TestCase, which marks the currently running test as skipped, as well as a set of useful decorators. Here's a short example:

import sys
import unittest

class SkippingExample(unittest.TestCase):

    @unittest.skip("testing skipping")
    def test_skip_me(self):
        self.fail("shouldn't happen")

    def test_normal(self):
        self.assertEqual(1, 1)

    @unittest.skipIf(sys.version_info < (2, 6),
                     "not supported in this version")
    def test_show_skip_if(self):
        # testing some things here
        pass

    @unittest.expectedFailure
    def test_expected_failure(self):
        self.fail("this should happen, unfortunately")


# Yes, you can skip whole classes, too!
@unittest.skip("class skipping")
class CompletelySkippedTest(unittest.TestCase):

    def test_not_run_at_all(self):
        self.fail("shouldn't happen")


if __name__ == "__main__":
    unittest.main()


Running it in verbose mode gives:


__main__.CompletelySkippedTest ... skipped 'class skipping'
test_expected_failure (__main__.SkippingExample) ... expected failure
test_normal (__main__.SkippingExample) ... ok
test_show_skip_if (__main__.SkippingExample) ... ok
test_skip_me (__main__.SkippingExample) ... skipped 'testing skipping'

----------------------------------------------------------------------
Ran 5 tests in 0.010s

OK (skipped=2, expected failures=1)
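
The decorators handle skips you can decide on at class-definition time; the skip() method mentioned above lets a test mark itself as skipped while it's running. Here's a minimal sketch (the resource probe is made up for illustration; the patch described in this post names the method skip(), but in released versions of Python it ended up as skipTest(), which is what the sketch uses, and raising unittest.SkipTest directly works as well):

import unittest

class DynamicSkipExample(unittest.TestCase):

    def test_optional_resource(self):
        have_resource = False  # hypothetical runtime probe
        if not have_resource:
            # marks the running test as skipped at run time
            self.skipTest("optional resource not available")
        self.assertTrue(have_resource)

if __name__ == "__main__":
    unittest.main()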


I have high hopes for this in Python's regression tests. Hopefully it will simplify the ugly system of test skipping we have now. It should also help us pacify other Python implementations, which want tests of CPython implementation details skipped.

11 comments:

Anonymous said...

Could you point at the original patch, rationale, etc.? I just recently finished putting together skip support for testtools based on what bzr, twisted, nose, etc. all do for skipping, and I want to make sure I can participate in other such discussions.

Benjamin Peterson said...

@lifeless
http://bugs.python.org/issue1034053

Noel O'Boyle said...

Fantastic! I was looking for this last week. It's great for differentiating unfixed bugs from regressions.

Anonymous said...

Thanks for that specific link. What I meant was: how can I find out about all unittest patches? I have a bunch of factored-out extensions to unittest I'd love to see land, and I want to participate in all discussions about them ;).

Also, your patch breaks twisted's test runner, so all tests are skipped in it, and this would have been easy to avoid.

Benjamin Peterson said...

@lifeless
If you want to contribute patches to unittest, you can always leave them on the tracker [1], and bring them up on Python-dev.

The Twisted problem is that Twisted skips tests if a TestCase has a skip attribute. [2] TestCase now has a skip() method, so Twisted's check just needs to become more fine-grained.


[1] http://bugs.python.org
[2] http://twistedmatrix.com/trac/ticket/3703
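
To make the collision concrete, here's a rough sketch (not Twisted's actual code) of an attribute check like the one its runner used. With this patch every TestCase grows a skip method, and a bound method is a truthy attribute, so the check fires for every test. (On released Pythons, where the method was renamed skipTest, this exact check no longer triggers.)

import unittest

class Example(unittest.TestCase):
    def test_something(self):
        pass

test = Example("test_something")

# A runner that skips on any truthy ``skip`` attribute, roughly as
# Twisted's did, now matches every test, because the bound skip()
# method added by this patch is itself a truthy attribute.
if getattr(test, "skip", None):
    print("runner would skip this test")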

Marius Gedminas said...

Is the SkipTest exception part of the API, or an implementation detail? Lack of an underscore hints at the former, but lack of mention in the documentation hints at the latter.

I'm also wondering about doctests. I suppose the following should work:

>>> if not some_condition:
...     raise unittest.SkipTest('need XYZ')

if you add it at the top of every doctest you want to (conditionally) skip.

Anonymous said...

@benjamin I think you're missing my point.

I want to be able to discuss - to help improve the review and quality of - any and all patches to unittest.py. I've written tonnes of extensions to unittest - subunit, testresources, much of the bzr test suite - helped with the trial refactoring that got it playing as nicely with unittest as it does today, and am contributing to jml's testtools as well.

I'm on python-dev and the testing-in-python list, and I saw _no_ discussion of this patch for 2.7 on either list. There are questions about its quality [see the lack of docs, the oddness of ClassTestSuite, the collision with trial (which I hear is fixed now - most excellent)] and, more importantly, about why you are reinventing a solution rather than picking up one of the many done by unittest extensions. I think *all* of us writing such extensions would love to see them put into unittest, except that proposals to alter unittest tend to end up in contentious discussions and failed PEPs, so we were feeling sad about the chances of doing this.

In short, I think it's great unittest is getting stuff done to it, but how can we actively be involved? Tracking every single patch in the Python patch tracker isn't a great approach.

And yes, I'd be happy to put patches up for review. What would you like first? Parallel testing? Test result wire serialisation? Feature selection? Test loading hooks to aid discovery? Test parameterisation [which aids interface testing and avoids test skew]? Slick runners with progress bars, etc.?

-Rob

Anonymous said...

As Rob points out, there are forums for discussing potential patches to unittest, such as python-dev, python-ideas, and testing-in-python, but patch discussions there seem to go nowhere productive, perhaps because too many people with different opinions get involved.

But just having stuff suddenly land in Python's trunk with no chance to participate (from my perspective at least) seems like an equally bad extreme, because by the time a patch has landed there, the author will quite naturally be somewhat emotionally invested in that particular design and implementation, and the momentum will be against making changes to it.

I'm not sure where the good middle ground is, but it would be good to at least try to find it. I don't think just saying "put patches on the tracker" is that middle ground, but I guess it's a start.

It does seem to be a shame that there is a wealth of interest and experience in testing in Python (as evidenced by e.g. the existence of the testing-in-python list), but (so far at least) no way to harness that interest and experience effectively to benefit unittest.py in the standard library.

Benjamin Peterson said...

@lifeless, spiv

I understand your position. Everyone agrees that unittest could use some improvement, but the wider discussions it requires always descend into bikeshedding quickly. It may have been foolish of me, but I almost consider this a move to actually "get something done". There's still time, though; the feature freeze is a few months away. I'd love to address the issues you have with the current implementation.

(Are you at PyCon?)

Anonymous said...

No, neither of us is at PyCon this year, unfortunately. It would be much easier to have this conversation (and maybe get some cool changes done!) in person :(

I was at PyCon last year, though... do you have a time machine handy?

Nagaraj said...

How do I skip tests based on the result of a previous test in the unittest framework?

http://paste.ofcode.org/QnQGqiuh2ZsH9TtfnH3Q83
If the skipIf condition for one test depends on the output of the previous test, how do I handle that?
In the pastebin code I've shared, I want the second test to be skipped, but that isn't happening.

# This is my code.

import unittest

class MyTestCase(unittest.TestCase):

    Flag = True

    @unittest.skipIf(Flag == False, 'Skipping Test')
    def test_1(self):
        print "Test 1"
        MyTestCase.Flag = False
        print MyTestCase.Flag

    @unittest.skipIf(Flag == False, 'Skipping Test')
    def test_2(self):
        print "Test 2"

if __name__ == '__main__':
    unittest.main()
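
The snag is that the arguments to skipIf are evaluated just once, when the class body is executed, so reassigning Flag inside test_1 can't affect decorators that have already been applied. A minimal sketch of run-time skipping instead, via the SkipTest exception discussed above (unittest runs test methods in alphabetical order by default, so test_1 runs before test_2):

import unittest

class DependentTests(unittest.TestCase):

    # shared state recording whether the first test succeeded
    first_passed = False

    def test_1(self):
        # ... real work would go here ...
        DependentTests.first_passed = True

    def test_2(self):
        # checked at run time, unlike a skipIf decorator, which is
        # evaluated once when the class body is defined
        if not DependentTests.first_passed:
            raise unittest.SkipTest("test_1 did not pass")
        # ... work that depends on test_1 ...

if __name__ == "__main__":
    unittest.main()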