Reputation: 418
I am writing tests using pytest. I have two tests, one of which depends on the other; I use pytest-dependency==0.5.1 for that. Something odd happens when both tests are parameterized: the dependent test is skipped even though the independent test succeeds. This is my code:
import pytest


@pytest.mark.parametrize('par1', ['val1', 'val2', 'val3'])
@pytest.mark.dependency()
def test_a(par1):
    print('hi from test a')
    assert 1 == 1


@pytest.mark.parametrize('par2', ['val21', 'val22', 'val23'])
@pytest.mark.dependency(depends=["test_a"])
def test_b(par2):
    print('hi from test c')
When I run pytest, I get:
pytest --log-cli-level=INFO
================================================================================================= test session starts ==================================================================================================
platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/username/dev/tests/test
plugins: dependency-0.5.1, mock-3.1.1, anyio-2.0.2, dash-1.16.1, celery-4.4.7, allure-pytest-2.8.21
collected 6 items
test_something.py::test_a[val1] PASSED [ 16%]
test_something.py::test_a[val2] PASSED [ 33%]
test_something.py::test_a[val3] PASSED [ 50%]
test_something.py::test_b[val21] SKIPPED [ 66%]
test_something.py::test_b[val22] SKIPPED [ 83%]
test_something.py::test_b[val23] SKIPPED [100%]
============================================================================================= 3 passed, 3 skipped in 0.05s =============================================================================================
If I remove the parameterization, all is well:
pytest --log-cli-level=INFO
================================================================================================= test session starts ==================================================================================================
platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/username/dev/tests/test
plugins: dependency-0.5.1, mock-3.1.1, anyio-2.0.2, dash-1.16.1, celery-4.4.7, allure-pytest-2.8.21
collected 2 items
test_something.py::test_a PASSED [ 50%]
test_something.py::test_b PASSED [100%]
================================================================================================== 2 passed in 0.01s ===================================================================================================
Why is this happening, and how do I work around it?
Upvotes: 0
Views: 684
Reputation: 16815
The problem is that the names of the parameterized tests contain the test parameter, e.g. test_a[val1], test_a[val2], etc., so pytest-dependency cannot find a test named test_a. To solve this, you can add a name to the dependency marker - in that case pytest-dependency ignores the real test name and uses the given one instead:
import pytest


@pytest.mark.parametrize('par1', ['val1', 'val2', 'val3'])
@pytest.mark.dependency(name='test_a')
def test_a(par1):
    print('hi from test a')
    assert 1 == 1


@pytest.mark.parametrize('par2', ['val21', 'val22', 'val23'])
@pytest.mark.dependency(depends=['test_a'])
def test_b(par2):
    print('hi from test b')
Now each test_b instance is executed if all of the test_a tests succeed.
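As a side note, if you only want to depend on specific parameterized instances rather than on all of them, pytest-dependency also accepts the full parameterized test names in depends. A minimal sketch under that assumption, reusing the parameter values from the question:

import pytest


@pytest.mark.parametrize('par1', ['val1', 'val2', 'val3'])
@pytest.mark.dependency()
def test_a(par1):
    # Each instance is registered under its full node name, e.g. test_a[val1].
    assert 1 == 1


@pytest.mark.parametrize('par2', ['val21', 'val22', 'val23'])
@pytest.mark.dependency(depends=['test_a[val1]', 'test_a[val2]'])
def test_b(par2):
    # Runs as long as test_a[val1] and test_a[val2] passed,
    # regardless of the outcome of test_a[val3].
    print('hi from test b')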
Upvotes: 4