Aran-Fey

Reputation: 43146

Using pytest's parametrize, how can I skip the remaining tests if one test case fails?

I'm using pytest.mark.parametrize to feed increasingly long inputs into a rather slow test function, like so:

@pytest.mark.parametrize('data', [
    b'ab',
    b'xyz'*1000,
    b'12345'*1024**2,
    ... # etc
])
def test_compression(data):
    ... # compress the data
    ... # decompress the data
    assert decompressed_data == data

Because compressing large amounts of data takes a long time, I'd like to skip all the remaining tests after one fails. For example, if the test fails with the input b'ab' (the first one), then b'xyz'*1000, b'12345'*1024**2, and all other parametrizations should be skipped (or xfail without being executed).

I know it's possible to attach marks to individual parametrizations like so:

@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    pytest.param("6*9", 42, marks=pytest.mark.xfail),
])

But I don't know how I could conditionally apply those marks depending on the status of the previous test case. Is there a way to do this?

Upvotes: 8

Views: 3942

Answers (3)

Roman Solyanik

Reputation: 26

The accepted answer is no longer valid as-is: item.session doesn't have a failednames attribute.

Inspired by https://docs.pytest.org/en/latest/example/simple.html#incremental-testing-test-steps

Just one more solution that is very close to the accepted one:

conftest.py

import pytest

MARKER_SKIPREST = "skiprest"
MARKER_PARAMETRIZE = "parametrize"

_failed_parametrized_tests = set()  # History of failed tests

def pytest_runtest_makereport(item, call):
    """Memorizes names of failed Parametrized tests"""
    marker_names = {marker.name for marker in item.iter_markers()}
    if {MARKER_SKIPREST, MARKER_PARAMETRIZE}.issubset(marker_names):
        if call.excinfo is not None:
            _failed_parametrized_tests.add(item.originalname)


def pytest_runtest_setup(item):
    """Check if the test has already failed with other param.
    If yes - xfail this test"""
    marker_names = {marker.name for marker in item.iter_markers()}
    if {MARKER_SKIPREST, MARKER_PARAMETRIZE}.issubset(marker_names):
        if item.originalname in _failed_parametrized_tests:
            pytest.xfail("Previous test failed")

setup.cfg

[tool:pytest]
markers =
    skiprest: Skip the rest of the params in a parametrized test if one of the tests in the sequence has failed.

test_?.py

@pytest.mark.skiprest
@pytest.mark.parametrize('data', [
    b'ab',
    b'xyz'*1000,
    b'12345'*1024**2,
    ... # etc
])
def test_compression(data):
    ...
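
A side note, not from the original answer: the pytest recipe linked above implements pytest_runtest_makereport as a hookwrapper, which gives you the finished report and lets you record only failures of the test body (rather than any exception, including setup/teardown errors). A minimal sketch of that variant, reusing the names from this answer:

import pytest

_failed_parametrized_tests = set()

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # record the failure only if the test body itself failed
    if report.when == "call" and report.failed:
        _failed_parametrized_tests.add(item.originalname)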

Upvotes: 0

rocksportrocker
rocksportrocker

Reputation: 7419

EDIT: I adapted my solution to the accepted one by using pytest.skip in the fixture.

The following hack could solve your problem. In the demo below, test_compression would fail for every input from 3 onwards, but after the first failure the remaining parametrizations are skipped. It requires that you request the skip_if_already_failed fixture in your test function:

import pytest


@pytest.fixture(scope="function")
def skip_if_already_failed(request, failed=set()):
    # the mutable default argument persists across calls and records
    # the base names of test functions that have already failed
    key = request.node.name.split("[")[0]  # strip the "[param]" suffix
    failed_before = request.session.testsfailed
    if key in failed:
        pytest.skip("previous test {} failed".format(key))
    yield
    # if the session-wide failure counter grew, this invocation failed
    failed_after = request.session.testsfailed
    if failed_before != failed_after:
        failed.add(key)


@pytest.mark.parametrize("data", [1, 2, 3, 4, 5, 6])
def test_compression(data, skip_if_already_failed):
    assert data < 3

And this is the output:

$ py.test -v sopytest.py
================================== test session starts ==================================
platform darwin -- Python 3.6.6, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- ...
cachedir: .pytest_cache
rootdir: ..., inifile:
collected 6 items

sopytest.py::test_compression[1] PASSED                                           [ 16%]
sopytest.py::test_compression[2] PASSED                                           [ 33%]
sopytest.py::test_compression[3] FAILED                                           [ 50%]
sopytest.py::test_compression[4] SKIPPED                                          [ 66%]
sopytest.py::test_compression[5] SKIPPED                                          [ 83%]
sopytest.py::test_compression[6] SKIPPED                                          [100%]

======================================= FAILURES ========================================
__________________________________ test_compression[3] __________________________________

data = 3, skip_if_already_failed = None

    @pytest.mark.parametrize("data", [1, 2, 3, 4, 5, 6])
    def test_compression(data, skip_if_already_failed):
>       assert data < 3
E       assert 3 < 3

sopytest.py:18: AssertionError
===================== 1 failed, 2 passed, 3 skipped in 0.08 seconds =====================

Upvotes: 4

hoefling

Reputation: 66231

Marks are evaluated before the tests are executed, so there's no way to apply a declarative mark (like skipif) whose condition depends on another test's result. You can implement custom test-skipping logic in hooks, though. Modifying the incremental testing - test steps recipe from the pytest docs:

# conftest.py
import pytest

def pytest_sessionstart(session):
    session.failednames = set()

def pytest_runtest_makereport(item, call):
    if call.excinfo is not None:
        item.session.failednames.add(item.originalname)

def pytest_runtest_setup(item):
    if item.originalname in item.session.failednames:
        pytest.skip("previous test failed (%s)" % item.name)  # or use pytest.xfail like in the other answer

Example test

@pytest.mark.parametrize('i', range(10))
def test_spam(i):
    assert i != 3

yields:

=================================== test session starts ===================================
collected 10 items

test_spam.py::test_spam[0] PASSED
test_spam.py::test_spam[1] PASSED
test_spam.py::test_spam[2] PASSED
test_spam.py::test_spam[3] FAILED
test_spam.py::test_spam[4] SKIPPED
test_spam.py::test_spam[5] SKIPPED
test_spam.py::test_spam[6] SKIPPED
test_spam.py::test_spam[7] SKIPPED
test_spam.py::test_spam[8] SKIPPED
test_spam.py::test_spam[9] SKIPPED

========================================= FAILURES ========================================
_______________________________________ test_spam[3] ______________________________________

i = 3

    @pytest.mark.parametrize('i', range(10))
    def test_spam(i):
>       assert i != 3
E       assert 3 != 3

test_spam.py:5: AssertionError
====================== 1 failed, 3 passed, 6 skipped in 0.06 seconds ======================

Edit: working with custom markers

def pytest_runtest_makereport(item, call):
    markers = {marker.name for marker in item.iter_markers()}
    if call.excinfo is not None and 'skiprest' in markers:
        item.session.failednames.add(item.originalname)

def pytest_runtest_setup(item):
    markers = {marker.name for marker in item.iter_markers()}
    if item.originalname in item.session.failednames and 'skiprest' in markers:
        pytest.skip(item.name)

Usage:

@pytest.mark.skiprest
@pytest.mark.parametrize('somearg', ['a', 'b', 'c'])
def test_marked(somearg):
    ...
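
One detail worth adding: since pytest 4.5, applying an unregistered mark like skiprest emits a PytestUnknownMarkWarning (and is an error under --strict-markers), so the custom marker should be registered, e.g. in pytest.ini:

[pytest]
markers =
    skiprest: skip the remaining parametrizations of a test after one of them fails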

Upvotes: 7
