Reputation: 697
I generally use the pytest.mark.parametrize decorator when writing unit tests. It occurred to me that when testing functions that raise exceptions, I could do something like the following:
bar.py:
def foo(n: int, threshold: int = 1) -> int:
    if n >= threshold:
        return n
    else:
        raise ValueError(f'n: {n} < {threshold} (threshold)')
test_bar.py:
import pytest

from bar import foo

test_foo_data = [
    (1, {}, 1),
    (0, {'threshold': 0}, 0),
    (0, {}, None),
    (1, {'threshold': 2}, None),
]

@pytest.mark.parametrize('n, params, expectation', test_foo_data)
def test_foo(n, params, expectation):
    if expectation is not None:
        assert foo(n, **params) == expectation
    else:
        with pytest.raises(ValueError):
            foo(n, **params)
results:
$ py.test
============================= test session starts ==============================
platform darwin -- Python 3.7.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /path/to/scripts
plugins: cov-2.9.0, hypothesis-5.19.0, quickcheck-0.8.4
collected 4 items
test_bar.py .... [100%]
============================== 4 passed in 0.02s ===============================
(I wouldn't necessarily check against None to switch between assert and with pytest.raises ..., but in this case it seemed straightforward.) The question is: just because I can do this, should I, or would it be better to write separate test functions for inputs that do and don't raise exceptions? The latter strikes me as slightly more tedious, but I'm unsure of best practices in this case.
Upvotes: 3
Views: 3560
Reputation: 1494
I think this is good practice. The reason is that it cuts down on code duplication. Your example doesn't show this, but real parameterized tests often have to do some amount of setup work before getting to the actual test (e.g. instantiating valid input objects from simplified parameters). If you split the valid and invalid inputs into separate test functions, it's hard to avoid duplicating this setup code.
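To make the duplication point concrete, here is a minimal sketch (build_args and the terse spec tuples are hypothetical, standing in for whatever setup a real suite needs); note how splitting the cases forces each test function to repeat the same setup call:

import pytest

from bar import foo

def build_args(spec):
    # Hypothetical shared setup: expand a terse spec into the real call
    # arguments (imagine constructing objects, temp files, etc. here).
    n, threshold = spec
    return n, {'threshold': threshold}

# With split tests, both functions must repeat the build_args() setup:
@pytest.mark.parametrize('spec, expectation', [((1, 1), 1), ((0, 0), 0)])
def test_foo_valid(spec, expectation):
    n, params = build_args(spec)
    assert foo(n, **params) == expectation

@pytest.mark.parametrize('spec', [(0, 1), (1, 2)])
def test_foo_invalid(spec):
    n, params = build_args(spec)
    with pytest.raises(ValueError):
        foo(n, **params)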
That said, I think there's a cleaner way to write this kind of test. The basic idea is to directly use either pytest.raises() or nullcontext() as a parameter. This simplifies the test itself (no more if-statement) and makes it possible to test for different kinds of exceptions (or different exception messages, an overlooked source of bugs). A small complication is that this requires the expected return value to be its own parameter. Since it never makes sense to specify both a return value and an exception, I recommend writing some simple helper functions to fill in whichever parameter is implied. The expected return value for error cases doesn't really matter, but I like to use Mock() because it won't cause problems if the test needs to treat it like some other kind of object.
import pytest
from contextlib import nullcontext
from unittest.mock import Mock

from bar import foo

def expected(x):
    # Valid case: a real return value plus a no-op context manager.
    return x, nullcontext()

def error(*args, **kwargs):
    # Error case: the return value is irrelevant (Mock()), and the
    # pytest.raises() context manager checks the exception.
    return Mock(), pytest.raises(*args, **kwargs)

test_foo_names = 'n, params, expected, error'
test_foo_data = [
    (1, {}, *expected(1)),
    (0, {'threshold': 0}, *expected(0)),
    (0, {}, *error(ValueError, match="0 < 1")),
    (1, {'threshold': 2}, *error(ValueError, match="1 < 2")),
]

@pytest.mark.parametrize(test_foo_names, test_foo_data)
def test_foo(n, params, expected, error):
    with error:
        assert foo(n, **params) == expected
Edit: The pytest documentation describes an idea pretty similar to this.
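For reference, a condensed sketch of what that documentation page shows (from memory; the docs alias nullcontext as does_not_raise, but the mechanism is the same as above):

import pytest
from contextlib import nullcontext as does_not_raise

@pytest.mark.parametrize('example_input, expectation', [
    (3, does_not_raise()),
    (0, pytest.raises(ZeroDivisionError)),
])
def test_division(example_input, expectation):
    # The expected context manager is itself a parameter.
    with expectation:
        assert (6 / example_input) is not None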
Upvotes: 3
Reputation: 524
The best practice for tests (in every language, btw) is that every test should test one and only one thing.
Every single test should be specific. You can tell whether your test is specific enough from its name: if you end up with a name like "test_foo_works" (like your test), you can infer that the test is too generic and therefore needs to be split.
Your test can be split into "test_valid_foo_inputs" and "test_invalid_foo_inputs", as sketched below.
In your example, you abused the parametrize decorator. Pytest provides parametrize for multiple parameter sets that serve the same purpose (like multiple valid, or multiple invalid, foo inputs).
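A minimal sketch of that split, reusing the data and function names from the question:

import pytest

from bar import foo

@pytest.mark.parametrize('n, params, expectation', [
    (1, {}, 1),
    (0, {'threshold': 0}, 0),
])
def test_valid_foo_inputs(n, params, expectation):
    assert foo(n, **params) == expectation

@pytest.mark.parametrize('n, params', [
    (0, {}),
    (1, {'threshold': 2}),
])
def test_invalid_foo_inputs(n, params):
    with pytest.raises(ValueError):
        foo(n, **params)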
Upvotes: 1