======
Manual
======

Introduction
------------

This document provides an overview of the features provided by testtools.
Refer to the API docs (i.e. docstrings) for full details on a particular
feature.

Extensions to TestCase
----------------------

Custom exception handling
~~~~~~~~~~~~~~~~~~~~~~~~~

testtools provides a way to control how test exceptions are handled. To do
this, add a new exception to self.exception_handlers on a TestCase. For
example::

  >>> self.exception_handlers.insert(-1, (ExceptionClass, handler))

Having done this, if any of setUp, tearDown, or the test method raise
ExceptionClass, handler will be called with the test case, test result and the
raised exception.
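
As an illustration, here is a handler for a hypothetical ``RetryNeeded``
exception (the exception class, and the choice to report it as a skip, are
made up for this sketch)::

  def handle_retry_needed(case, result, exception):
      # Called with the test case, the test result and the raised
      # exception, as described above; here we record a skip rather
      # than an error.
      result.addSkip(case, reason='retry needed: %s' % exception)

  class RetryAwareTests(TestCase):

      def setUp(self):
          super(RetryAwareTests, self).setUp()
          self.exception_handlers.insert(
              -1, (RetryNeeded, handle_retry_needed))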

Controlling test execution
~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to control more than just how exceptions are handled, you can
provide a custom `RunTest` to a TestCase. The `RunTest` object can change
everything about how the test executes.

To work with `testtools.TestCase`, a `RunTest` must have a factory that takes
a test and an optional list of exception handlers. Instances returned by the
factory must have a `run()` method that takes an optional `TestResult` object.
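
The protocol is small enough to sketch. The class below is illustrative only
(it times the test and then delegates to the default ``RunTest``); it is not
part of testtools itself, and the ``elapsed`` attribute is made up::

  import time

  from testtools.runtest import RunTest

  class TimedRunTest(object):
      """A minimal object satisfying the factory protocol described above."""

      def __init__(self, case, handlers=None):
          self.case = case
          self.handlers = handlers or []

      def run(self, result=None):
          # Wrap the default behaviour; a real implementation could
          # replace it entirely.
          start = time.time()
          try:
              return RunTest(self.case, self.handlers).run(result)
          finally:
              self.case.elapsed = time.time() - start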

The default is `testtools.runtest.RunTest`, which calls 'setUp', the test
method and 'tearDown' in the normal, vanilla way that Python's standard
unittest does.

To specify a `RunTest` for all the tests in a `TestCase` class, do something
like this::

  class SomeTests(TestCase):
      run_tests_with = CustomRunTestFactory

To specify a `RunTest` for a specific test in a `TestCase` class, do::

  class SomeTests(TestCase):

      @run_test_with(CustomRunTestFactory, extra_arg=42, foo='whatever')
      def test_something(self):
          pass

In addition, either of these can be overridden by passing a factory in to the
`TestCase` constructor with the optional 'runTest' argument.
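
For example, reusing the names from the snippets above::

  test = SomeTests('test_something', runTest=CustomRunTestFactory)
  test.run()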

TestCase.addCleanup
~~~~~~~~~~~~~~~~~~~

addCleanup is a robust way to arrange for a cleanup function to be called
before tearDown. This is a powerful and simple alternative to putting cleanup
logic in a try/finally block or tearDown method. e.g.::

  def test_foo(self):
      foo.lock()
      self.addCleanup(foo.unlock)
      ...

Cleanups can also report multiple errors, if appropriate, by wrapping them in
a testtools.MultipleExceptions object::

  raise MultipleExceptions(exc_info1, exc_info2)
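
Extra positional and keyword arguments given to addCleanup are passed on to
the cleanup function when it is eventually called, which avoids a lambda in
the common case (``tempfile`` and ``shutil`` are the standard library
modules)::

  def test_bar(self):
      tempdir = tempfile.mkdtemp()
      self.addCleanup(shutil.rmtree, tempdir)
      ...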


TestCase.addOnException
~~~~~~~~~~~~~~~~~~~~~~~

addOnException adds an exception handler that will be called from the test
framework when it detects an exception from your test code. The handler is
given the exc_info for the exception, and can use this opportunity to attach
more data (via the addDetails API) or act on the failure in other ways.
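
For example, a handler that attaches extra diagnostic data when something goes
wrong. ``start_test_server`` and the server's ``get_log`` method are stand-ins
for whatever your test manages; ``text_content`` is a convenience from
``testtools.content`` (recent versions)::

  from testtools.content import text_content

  class SomeServerTests(TestCase):

      def setUp(self):
          super(SomeServerTests, self).setUp()
          self.server = start_test_server()  # placeholder helper
          self.addOnException(self._attach_server_log)

      def _attach_server_log(self, exc_info):
          # Called with the exc_info of the failure; attach the log via
          # the details API so it appears in the failure report.
          self.addDetail('server-log', text_content(self.server.get_log()))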


TestCase.patch
~~~~~~~~~~~~~~

``patch`` is a convenient way to monkey-patch a Python object for the duration
of your test. It's especially useful for testing legacy code. e.g.::

  def test_foo(self):
      my_stream = StringIO()
      self.patch(sys, 'stderr', my_stream)
      run_some_code_that_prints_to_stderr()
      self.assertEqual('', my_stream.getvalue())

The call to ``patch`` above masks sys.stderr with 'my_stream' so that anything
printed to stderr will be captured in a StringIO variable that can actually be
tested. Once the test is done, the real sys.stderr is restored to its rightful
place.


TestCase.skipTest
~~~~~~~~~~~~~~~~~

``skipTest`` is a simple way to have a test stop running and be reported as a
skipped test, rather than a success/error/failure. This is an alternative to
convoluted logic during test loading, permitting later and more localized
decisions about the appropriateness of running a test. Many reasons exist to
skip a test - for instance when a dependency is missing, or if the test is
expensive and should not be run while on laptop battery power, or if the test
is testing an incomplete feature (this is sometimes called a TODO). Using this
feature when running your test suite with a TestResult object that is missing
the ``addSkip`` method will result in the ``addError`` method being invoked
instead. ``skipTest`` was previously known as ``skip``, but as Python 2.7 adds
``skipTest`` support, the ``skip`` name is now deprecated (no warning is
emitted yet; some time in the future we may add one).
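
For example, skipping when a dependency is missing (``os`` is the standard
library module)::

  def test_make_symlink(self):
      if not hasattr(os, 'symlink'):
          self.skipTest('symlinks are not supported on this platform')
      ...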

TestCase.useFixture
~~~~~~~~~~~~~~~~~~~

``useFixture(fixture)`` calls setUp on the fixture, schedules a cleanup to
clean it up, and schedules a cleanup to attach all details held by the
fixture to the details dict of the test case. The fixture object should meet
the ``fixtures.Fixture`` protocol (version 0.3.4 or newer). This is useful
for moving code out of setUp and tearDown methods and into composable side
classes.
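
For example, using the ``TempDir`` fixture from the ``fixtures`` package
(assuming it is installed; ``os`` is the standard library module)::

  import os

  from fixtures import TempDir

  def test_writes_report(self):
      tempdir = self.useFixture(TempDir())
      # The fixture is cleaned up automatically at the end of the test.
      path = os.path.join(tempdir.path, 'report.txt')
      ...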


New assertion methods
~~~~~~~~~~~~~~~~~~~~~

testtools adds several assertion methods:

* assertIn
* assertNotIn
* assertIs
* assertIsNot
* assertIsInstance
* assertThat
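
For instance (``result`` here stands for whatever object the code under test
produced; ``assertThat`` is covered in its own section below)::

  self.assertIn('ll', 'hello')
  self.assertIsNot(None, result)
  self.assertIsInstance(result, dict)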


Improved assertRaises
~~~~~~~~~~~~~~~~~~~~~

TestCase.assertRaises returns the caught exception. This is useful for
asserting more things about the exception than just the type::

  error = self.assertRaises(UnauthorisedError, thing.frobnicate)
  self.assertEqual('bob', error.username)
  self.assertEqual('User bob cannot frobnicate', str(error))

Note that this is incompatible with the assertRaises in unittest2/Python 2.7.
While we have no immediate plans to change it to be compatible, consider using
the new assertThat facility instead::

  self.assertThat(
      lambda: thing.frobnicate('foo', 'bar'),
      Raises(MatchesException(UnauthorisedError('bob'))))

There is also a convenience function to handle this common case::

  self.assertThat(
      lambda: thing.frobnicate('foo', 'bar'),
      raises(UnauthorisedError('bob')))


TestCase.assertThat
~~~~~~~~~~~~~~~~~~~

assertThat is a clean way to write complex assertions without tying them to
the TestCase inheritance hierarchy (and thus making them easier to reuse).

assertThat takes an object to be matched, and a matcher, and fails if the
matcher does not match the matchee.

See pydoc testtools.Matcher for the protocol that matchers need to implement.

testtools includes some matchers in testtools.matchers.
``python -c 'import testtools.matchers; print testtools.matchers.__all__'``
will list those matchers.
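
The matcher protocol is small: a ``match`` method that returns ``None`` on
success or a mismatch object on failure, plus a useful ``__str__``. A rough
sketch of a custom matcher (see the API docs for the authoritative details)::

  from testtools.matchers import Mismatch

  class IsDivisibleBy(object):
      """Matches if the candidate divides evenly by ``divisor``."""

      def __init__(self, divisor):
          self.divisor = divisor

      def __str__(self):
          return 'IsDivisibleBy(%d)' % self.divisor

      def match(self, actual):
          remainder = actual % self.divisor
          if remainder:
              return Mismatch(
                  '%d is not divisible by %d (remainder %d)'
                  % (actual, self.divisor, remainder))
          return None

It is used like any built-in matcher::

  self.assertThat(9, IsDivisibleBy(3))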

An example using the DocTestMatches matcher, which uses doctest's example
matching logic::

  def test_foo(self):
      self.assertThat([1,2,3,4], DocTestMatches('[1, 2, 3, 4]'))


Creation methods
~~~~~~~~~~~~~~~~

testtools.TestCase implements creation methods called ``getUniqueString`` and
``getUniqueInteger``. See pages 419-423 of *xUnit Test Patterns* by Meszaros
for a detailed discussion of creation methods.
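
For example (``make_user`` stands in for whatever factory your code under test
provides; the point is that the test does not care about the exact value, only
that it is unique)::

  def test_user_reports_its_name(self):
      name = self.getUniqueString()
      user = make_user(name=name)
      self.assertEqual(name, user.name)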


Test renaming
~~~~~~~~~~~~~

``testtools.clone_test_with_new_id`` is a function to copy a test case
instance to one with a new name. This is helpful for implementing test
parameterization.
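
A minimal sketch of parameterization using it (the backend names and the id
scheme below are made up for illustration)::

  from testtools import clone_test_with_new_id

  def make_backend_tests(test):
      # One renamed copy of ``test`` per backend we want to cover.
      return [
          clone_test_with_new_id(test, '%s(%s)' % (test.id(), backend))
          for backend in ('sqlite', 'postgres')]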


Extensions to TestResult
------------------------

TestResult.addSkip
~~~~~~~~~~~~~~~~~~

This method is called on result objects when a test skips. The
``testtools.TestResult`` class records skips in its ``skip_reasons`` instance
dict. These can be reported on in much the same way as successful tests.
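
``skip_reasons`` maps each skip reason to the tests skipped for that reason,
so simple reporting is straightforward. A sketch (``suite`` and
``report_skips`` are placeholders)::

  from testtools import TestResult

  result = TestResult()
  suite.run(result)
  for reason, tests in result.skip_reasons.items():
      report_skips(reason, tests)  # placeholder reporting helper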


TestResult.time
~~~~~~~~~~~~~~~

This method controls the time used by a TestResult, permitting accurate
timing of test results gathered on different machines or in different threads.
See pydoc testtools.TestResult.time for more details.
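
A sketch of stamping results with externally gathered times. ``event_time``
and ``test`` are placeholders; passing ``None`` goes back to reading the
system clock (check the pydoc mentioned above for your version)::

  from testtools import TestResult

  result = TestResult()
  result.time(event_time)    # subsequent events are stamped with this time
  result.startTest(test)
  result.addSuccess(test)
  result.stopTest(test)
  result.time(None)          # revert to the machine's own clock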


ThreadsafeForwardingResult
~~~~~~~~~~~~~~~~~~~~~~~~~~

A TestResult which forwards activity to another test result, but synchronises
on a semaphore to ensure that all the activity for a single test arrives in a
batch. This allows simple TestResults which do not expect concurrent test
reporting to be fed the activity from multiple test threads or processes.

Note that when you provide multiple errors for a single test, the target sees
each error as a distinct complete test.
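
A sketch of wiring several of these up to feed one underlying result (the
constructor takes the target result and a shared semaphore; check the API docs
for your version)::

  import sys
  import threading

  from testtools import TextTestResult, ThreadsafeForwardingResult

  target = TextTestResult(sys.stdout)
  semaphore = threading.Semaphore(1)
  # One forwarding result per worker, all batching into the same target.
  forwarders = [
      ThreadsafeForwardingResult(target, semaphore) for _ in range(4)]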


TextTestResult
~~~~~~~~~~~~~~

A TestResult that provides a text UI very similar to the Python standard
library UI. Key differences are that it supports the extended outcomes and
details API, and is completely encapsulated in the result object, permitting
it to be used without a 'TestRunner' object. Not all the Python 2.7 outcomes
are displayed (yet). It is also a 'quiet' result with no dots or verbose mode.
These limitations will be corrected soon.


Test Doubles
~~~~~~~~~~~~

In testtools.testresult.doubles there are three test doubles that testtools
uses for its own testing: Python26TestResult, Python27TestResult and
ExtendedTestResult. These TestResult objects each implement a single variation
of the TestResult API and log activity to a list, self._events. They are made
available for the convenience of people writing their own extensions.


startTestRun and stopTestRun
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Python 2.7 added hooks 'startTestRun' and 'stopTestRun' which are called
before and after the entire test run. 'stopTestRun' is particularly useful for
test results that wish to produce summary output.

testtools.TestResult provides empty startTestRun and stopTestRun methods, and
the default testtools runner will call these methods appropriately.
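
For example, a result that prints a one-line summary at the end of the run
(a sketch; ``sys`` is the standard library module)::

  import sys

  from testtools import TestResult

  class SummaryResult(TestResult):

      def stopTestRun(self):
          super(SummaryResult, self).stopTestRun()
          sys.stdout.write(
              'Ran %d test(s): %d error(s), %d failure(s)\n'
              % (self.testsRun, len(self.errors), len(self.failures)))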


Extensions to TestSuite
-----------------------

ConcurrentTestSuite
~~~~~~~~~~~~~~~~~~~

A TestSuite for parallel testing. This is used in conjunction with a helper
that runs a single suite in some parallel fashion (for instance, forking,
handing off to a subprocess, to a compute cloud, or simple threads).
ConcurrentTestSuite uses the helper to get a number of separate runnable
objects (each with a run(result) method), runs them all in threads and uses
ThreadsafeForwardingResult to coalesce their activity.
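
A sketch, using a deliberately naive helper that gives every individual test
its own thread (fine for a handful of tests, wasteful for thousands;
``SomeTests`` and its test names are placeholders)::

  import unittest

  from testtools import ConcurrentTestSuite, TestResult
  from testtools.testsuite import iterate_tests

  def split_suite(suite):
      # Each individual test becomes its own runnable, and therefore its
      # own thread.
      return list(iterate_tests(suite))

  suite = unittest.TestSuite([SomeTests('test_foo'), SomeTests('test_bar')])
  ConcurrentTestSuite(suite, split_suite).run(TestResult())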


Running tests
-------------

testtools provides a convenient way to run a test suite using the testtools
result object: python -m testtools.run testspec [testspec...].

To run tests with Python 2.4, you'll have to do something like:
python2.4 /path/to/testtools/run.py testspec [testspec ...].


Test discovery
--------------

testtools includes a backported version of the Python 2.7 glue for using the
discover test discovery module. If you either have Python 2.7/3.1 or newer, or
install the 'discover' module, then you can invoke discovery::

  python -m testtools.run discover [path]

For more information see the Python 2.7 unittest documentation, or::

  python -m testtools.run --help


Twisted support
---------------

Support for running Twisted tests is very experimental right now. You
shouldn't really do it. However, if you are going to, here are some tips for
converting your Trial tests into testtools tests.

* Use the AsynchronousDeferredRunTest runner (see the sketch after this list)
* Make sure to upcall to setUp and tearDown
* Don't use setUpClass or tearDownClass
* Don't expect setting .todo, .timeout or .skip attributes to do anything
* flushLoggedErrors is not there for you. Sorry.
* assertFailure is not there for you. Even more sorry.
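
For instance, something along these lines. The import location shown is the
``testtools.deferredruntest`` module; treat it (and the deferred-returning
helper) as assumptions to check against your installation::

  from testtools import TestCase
  from testtools.deferredruntest import AsynchronousDeferredRunTest

  class ExampleTwistedTests(TestCase):

      run_tests_with = AsynchronousDeferredRunTest

      def test_something_deferred(self):
          d = do_something_returning_a_deferred()  # placeholder
          d.addCallback(self.assertEqual, 'expected value')
          return d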


General helpers
---------------

Lots of the time we would like to conditionally import modules. testtools
needs to do this itself, and graciously extends the ability to its users.

Instead of::

  try:
      from twisted.internet import defer
  except ImportError:
      defer = None

You can do::

  defer = try_import('twisted.internet.defer')


Instead of::

  try:
      from StringIO import StringIO
  except ImportError:
      from io import StringIO

You can do::

  StringIO = try_imports(['StringIO.StringIO', 'io.StringIO'])
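
Both helpers also accept an alternative value to return when nothing can be
imported; the keyword name below is from current testtools, so verify it
against your version::

  # Fall back to a stub object when Twisted is unavailable (the stub is a
  # placeholder for this example).
  defer = try_import('twisted.internet.defer', alternative=stub_defer)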