Reputation: 4743
Use case: In a pytest test suite I have a @fixture which raises exceptions if command line options for its configuration are missing. I've written a test for this fixture using xfail:
import pytest
from <module> import <exception>

@pytest.mark.xfail(raises=<exception>)
def test_fixture_with_missing_options_raises_exception(rc_visard):
    pass
However, the output after running the tests does not report the test as passed but as "xfailed" instead:
============================== 1 xfailed in 0.15 seconds ========================
In addition to that, I am not able to test whether the fixture raises the exception for specific missing command line options. Is there a better approach to do this? Can I mock the pytest command line options somehow, so that I do not need to call specific tests via pytest --<commandline-option-a> <test-file-name>::<test-name>?
Upvotes: 6
Views: 8160
Reputation: 66541
Suppose you have a simplified project with conftest.py containing the following code:
import pytest

def pytest_addoption(parser):
    parser.addoption('--foo', action='store', dest='foo', default='bar',
                     help='--foo should be always bar!')

@pytest.fixture
def foo(request):
    fooval = request.config.getoption('foo')
    if fooval != 'bar':
        raise ValueError('expected foo to be "bar"; "{}" provided'.format(fooval))
    return fooval
It adds a new command line arg --foo and a fixture foo returning the passed arg, or bar if none is specified. If anything other than bar is passed via --foo, the fixture raises a ValueError.
You use the fixture as usual, for example:
def test_something(foo):
    assert foo == 'bar'
Now let's test that fixture.
In this example, we need to do some simple refactoring first. Move the fixture and related code to a file called something other than conftest.py, for example my_plugin.py:
# my_plugin.py
import pytest

def pytest_addoption(parser):
    parser.addoption('--foo', action='store', dest='foo', default='bar',
                     help='--foo should be always bar!')

@pytest.fixture
def foo(request):
    fooval = request.config.getoption('foo')
    if fooval != 'bar':
        raise ValueError('expected foo to be "bar"; "{}" provided'.format(fooval))
    return fooval
In conftest.py, ensure the new plugin is loaded:
# conftest.py
pytest_plugins = ['my_plugin']
Run the existing test suite to ensure we didn't break anything; all tests should still pass.
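If you prefer to drive that check from Python instead of the shell, pytest.main() returns the run's exit code; a minimal sketch (the file name run_suite.py is just an illustration):
# run_suite.py - equivalent to running `pytest -q` from the shell
import sys
import pytest

if __name__ == '__main__':
    # pytest.main() returns an exit code; 0 means all tests passed
    sys.exit(pytest.main(['-q']))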
pytester
pytest provides an extra plugin for writing plugin tests, called pytester. It is not activated by default, so you have to do that manually. In conftest.py, extend the plugins list with pytester:
# conftest.py
pytest_plugins = ['my_plugin', 'pytester']
Once pytester is active, you get a new fixture available called testdir. It can generate and run pytest test suites from code. Here's what our first test will look like:
# test_foo_fixture.py
def test_all_ok(testdir):
    testdata = '''
        def test_sample(foo):
            assert True
    '''
    testconftest = '''
        pytest_plugins = ['my_plugin']
    '''
    testdir.makeconftest(testconftest)
    testdir.makepyfile(testdata)
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
It should be pretty obvious what happens here: we provide the test code as a string, and testdir generates a pytest project from it in a temporary directory. To ensure our foo fixture is available in the generated test project, we pass it in the generated conftest the same way as we do in the real one. testdir.runpytest() starts the test run, producing a result that we can inspect.
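As a side note, besides assert_outcomes the result object also exposes the exit code of the inner pytest run via result.ret, so an equivalent check could look like this (a minimal sketch; the test name is just an illustration):
def test_all_ok_exit_code(testdir):
    testdir.makeconftest("pytest_plugins = ['my_plugin']")
    testdir.makepyfile('''
        def test_sample(foo):
            assert True
    ''')
    result = testdir.runpytest()
    # exit code 0 means the inner pytest run finished with all tests passed
    assert result.ret == 0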
Let's add another test that checks whether foo raises a ValueError:
def test_foo_valueerror_raised(testdir):
    testdata = '''
        def test_sample(foo):
            assert True
    '''
    testconftest = '''
        pytest_plugins = ['my_plugin']
    '''
    testdir.makeconftest(testconftest)
    testdir.makepyfile(testdata)
    result = testdir.runpytest('--foo', 'baz')
    result.assert_outcomes(errors=1)
    result.stdout.fnmatch_lines([
        '*ValueError: expected foo to be "bar"; "baz" provided'
    ])
Here we execute the generated tests with --foo baz and afterwards verify that one test ended with an error and that the error output contains the expected error message.
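Since runpytest() accepts arbitrary command line arguments, the same pattern extends to checking several invalid option values at once, which also covers the question about testing specific options; a sketch using parametrize (the test name and the sample values are just illustrations):
import pytest

@pytest.mark.parametrize('badval', ['baz', 'qux'])
def test_foo_rejects_non_bar_values(testdir, badval):
    testdir.makeconftest("pytest_plugins = ['my_plugin']")
    testdir.makepyfile('''
        def test_sample(foo):
            assert True
    ''')
    # every value other than 'bar' should make the foo fixture raise ValueError
    result = testdir.runpytest('--foo', badval)
    result.assert_outcomes(errors=1)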
Upvotes: 8