Reputation: 10863
I have a number of projects where I use the pytest.mark.xfail marker to mark tests that fail but shouldn't, so that a failing test case can be added before the underlying issue is fixed. I do not want to skip these tests, because if something I do causes them to start passing, I want to be informed so that I can remove the xfail marker and avoid regressions.
The problem is that because xfail tests actually run until they fail, any lines hit leading up to the failure are counted as "covered", even if no passing test exercises them, which gives me misleading metrics about how much of my code is actually tested as working. A minimal example:
pkg.py
def f(fail):
    if fail:
        print("This line should not be covered")
        return "wrong answer"
    return "right answer"
test_pkg.py
import pytest
from pkg import f


def test_success():
    assert f(fail=False) == "right answer"


@pytest.mark.xfail
def test_failure():
    assert f(fail=True) == "right answer"
Running python -m pytest --cov=pkg, I get:
platform linux -- Python 3.7.1, pytest-3.10.0, py-1.7.0, pluggy-0.8.0
rootdir: /tmp/cov, inifile:
plugins: cov-2.6.0
collected 2 items
tests/test_pkg.py .x [100%]
----------- coverage: platform linux, python 3.7.1-final-0 -----------
Name Stmts Miss Cover
----------------------------
pkg.py 5 0 100%
As you can see, all five lines are covered, but lines 3 and 4 are only hit during the xfail test.
The way I handle this now is to set up tox to run something like pytest -m "not xfail" --cov && pytest -m xfail, but in addition to being a bit cumbersome, that only filters out tests carrying the xfail mark, which means that conditional xfails are filtered out regardless of whether or not their condition is met.
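For reference, a minimal sketch of what that tox setup might look like (the environment list and the pkg package name are assumptions matching the example above):
[tox]
envlist = py37

[testenv]
deps =
    pytest
    pytest-cov
commands =
    pytest -m "not xfail" --cov=pkg
    pytest -m xfail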
Is there any way to have coverage or pytest not count coverage from failing tests? Alternatively, I would be OK with a mechanism that ignores coverage from xfail tests, as long as it only ignores conditional xfail tests when the condition is met.
Upvotes: 2
Views: 1678
Reputation: 66231
Since you're using the pytest-cov plugin, take advantage of its no_cover marker. When a test is annotated with pytest.mark.no_cover, code coverage is turned off for that test. The only thing left to implement is applying the no_cover marker to all tests marked with pytest.mark.xfail. In your conftest.py:
import pytest


def pytest_collection_modifyitems(items):
    for item in items:
        if item.get_closest_marker('xfail'):
            item.add_marker(pytest.mark.no_cover)
Running your example will now yield:
$ pytest --cov=pkg -v
=================================== test session starts ===================================
platform darwin -- Python 3.7.1, pytest-3.9.1, py-1.7.0, pluggy-0.8.0
cachedir: .pytest_cache
rootdir: /Users/hoefling/projects/private/stackoverflow, inifile:
plugins: cov-2.6.0
collected 2 items
test_pkg.py::test_success PASSED [ 50%]
test_pkg.py::test_failure xfail [100%]
---------- coverage: platform darwin, python 3.7.1-final-0 -----------
Name Stmts Miss Cover
----------------------------
pkg.py 5 2 60%
=========================== 1 passed, 1 xfailed in 0.04 seconds ===========================
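As a side note, the no_cover marker can also be applied by hand to an individual test instead of via the hook; a small sketch reusing the test from the question:
import pytest
from pkg import f


@pytest.mark.no_cover
@pytest.mark.xfail
def test_failure():
    # coverage collection is turned off for this test only
    assert f(fail=True) == "right answer"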
xfail marker arguments
The marker arguments can be accessed via marker.args and marker.kwargs, so if you e.g. have a marker
@pytest.mark.xfail(sys.platform == 'win32', reason='This fails on Windows')
you can access the arguments with
marker = item.get_closest_marker('xfail')
condition = marker.args[0]
reason = marker.kwargs['reason']
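Note that a bare @pytest.mark.xfail has neither a positional condition nor a reason keyword, so the lookups above would raise IndexError or KeyError for it; a defensive sketch:
marker = item.get_closest_marker('xfail')
if marker is not None:
    # a bare @pytest.mark.xfail passes no positional condition and no reason
    condition = marker.args[0] if marker.args else True
    reason = marker.kwargs.get('reason', '')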
To consider the condition flag, the hook from above can be modified as follows:
def pytest_collection_modifyitems(items):
    for item in items:
        marker = item.get_closest_marker('xfail')
        # add no_cover for a bare xfail (no positional args)
        # or when the xfail condition evaluates to a truthy value
        if marker and (not marker.args or marker.args[0]):
            item.add_marker(pytest.mark.no_cover)
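For illustration (the test below is made up), a conditional xfail that this hook only excludes from coverage on platforms where the condition is truthy, here Windows:
import os
import sys

import pytest


@pytest.mark.xfail(sys.platform == 'win32', reason='This fails on Windows')
def test_path_separator():
    # hypothetical platform-dependent test: on Linux/macOS it passes and its
    # coverage is counted; on Windows it is xfailed and excluded from coverage
    assert os.sep == '/'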
Upvotes: 3