Reputation: 1571
I'm using pytest for my selenium tests and wanted to know if it's possible to have multiple assertions in a single test?
I call a function that compares multiple values and I want the test to report on all the values that don't match up. The problem I'm having is that using "assert" or "pytest.fail" stops the test as soon as it finds a value that doesn't match up.
Is there a way to make the test carry on running and report on all values that don't match?
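To make the problem concrete, here is a minimal sketch of the situation (compare_values is a hypothetical stand-in for the comparison function, not code from the actual test suite):

```python
def compare_values(actual, expected):
    # A plain assert raises on the first mismatch and aborts the test,
    # so any later mismatches are never checked or reported.
    for key, want in expected.items():
        assert actual[key] == want, f"{key}: {actual[key]} != {want}"

# Both "b" and "c" are wrong here, but only the "b" mismatch surfaces.
try:
    compare_values({"a": 1, "b": 0, "c": 0}, {"a": 1, "b": 2, "c": 3})
except AssertionError as exc:
    print(exc)  # reports only the "b" mismatch; "c" is never checked
```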
Upvotes: 45
Views: 59061
Reputation: 134
For anyone coming here using newer versions of pytest, there is built-in parametrization: https://docs.pytest.org/en/6.2.x/parametrize.html . As shown in the documentation, your test is split into multiple tests when you use it. The following example is taken from there; one test function is collected as three tests.
import pytest
@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Result:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 3 items
test_expectation.py ..F [100%]
================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________
test_input = '6*9', expected = 42
    @pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
    def test_eval(test_input, expected):
>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')

test_expectation.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_expectation.py::test_eval[6*9-42] - AssertionError: assert 54...
======================= 1 failed, 2 passed in 0.12s ========================
Upvotes: 3
Reputation: 40773
There are a couple of options: parametrizing the test, collecting errors and asserting at the end, using a plugin such as pytest-assume, or splitting the checks into separate tests that share a class-scoped fixture.
Other people have provided examples for the first few; I am going to discuss the last option.
The example provided in the pytest documentation is a little more elaborate. I am going to provide a simpler example.
#!/usr/bin/env python3
"""
Multiple independent asserts using class-scope fixture
"""
import pytest


@pytest.fixture(scope="class")
def data():
    """
    Create data for test.

    Using scope=class, this fixture is created once per class. That
    means each test should exercise care not to alter the fixture data,
    or subsequent tests might fail.
    """
    fixture_data = dict(a=1, b=2, c=3)
    print(f"(data fixture created at {id(fixture_data)}) ", end="")
    return fixture_data


class TestItWithFailures:
    def test_a_value(self, data):
        assert data["a"] == 1
        # Modifying the data will cause failures in subsequent tests
        data["b"] = 200
        data["c"] = 300

    def test_b_value(self, data):
        # Fails because of the previous modification
        assert data["b"] == 2

    def test_c_value(self, data):
        # Fails because of the previous modification
        assert data["c"] == 3


class TestWithSuccess:
    def test_a_value(self, data):
        assert data["a"] == 1

    def test_b_value(self, data):
        assert data["b"] == 2

    def test_c_value(self, data):
        assert data["c"] == 3
test_it.py::TestItWithFailures::test_a_value (data fixture created at 4366616320) PASSED
test_it.py::TestItWithFailures::test_b_value FAILED
test_it.py::TestItWithFailures::test_c_value FAILED
test_it.py::TestWithSuccess::test_a_value (data fixture created at 4372781312) PASSED
test_it.py::TestWithSuccess::test_b_value PASSED
test_it.py::TestWithSuccess::test_c_value PASSED
Note that the data fixture is created once per class: the failures in TestItWithFailures are caused by the modifications made in its own first test, while TestWithSuccess receives a fresh copy of the data.
Upvotes: 2
Reputation: 31
Here's a rather simplistic approach:
import pytest

def test_sample(texts):
    flag = True
    for text in texts:
        if text != "anything":
            flag = False
    if not flag:
        pytest.fail("text did not match", pytrace=True)
Upvotes: 3
Reputation: 714
Here's an alternative approach called delayed assert. It's pretty similar to what @Tryph has provided, and gives a better stack trace.
The delayed-assert package on PyPI implements this approach. See also the pr4bh4sh/python-delayed-assert repository on GitHub, or install from PyPI using:
pip install delayed-assert
You can use (possibly) any assertion library in combination with python-delayed-assert. Consider it more of a stack-trace manager library than an assertion library. Check the repository for example uses.
Upvotes: 3
Reputation: 10260
Yet another library is available from Brian Okken, author of the 2017 Pragmatic book on pytest: https://pythontesting.net/books/pytest/ and https://github.com/okken/pytest-check
import pytest_check as check

def test_example():
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")
Upvotes: 5
Reputation: 26845
pytest-assume is "a pytest plugin that allows multiple failures per test". Here's an example of how you would use it (taken from the README):
import pytest

@pytest.mark.parametrize(('x', 'y'), [(1, 1), (1, 0), (0, 1)])
def test_simple_assume(x, y):
    pytest.assume(x == y)
    pytest.assume(True)
    pytest.assume(False)
Even though some of the assertions fail, they all get evaluated and reported:
======================================== FAILURES =========================================
_________________________________ test_simple_assume[1-1] _________________________________
> pytest.assume(False)
test_assume.py:7
y = 1
x = 1
----------------------------------------
Failed Assumptions:1
_________________________________ test_simple_assume[1-0] _________________________________
> pytest.assume(x == y)
test_assume.py:5
y = 0
x = 1
> pytest.assume(False)
test_assume.py:7
y = 0
x = 1
----------------------------------------
Failed Assumptions:2
_________________________________ test_simple_assume[0-1] _________________________________
> pytest.assume(x == y)
test_assume.py:5
y = 1
x = 0
> pytest.assume(False)
test_assume.py:7
y = 1
x = 0
----------------------------------------
Failed Assumptions:2
================================ 3 failed in 0.02 seconds =================================
Upvotes: 21
Reputation: 6209
As Jon Clements commented, you can fill a list of error messages and then assert the list is empty, displaying each message when the assertion is false.
Concretely, it could look something like this:
def test_something(self):
    errors = []
    # replace assertions by conditions
    if not condition_1:
        errors.append("an error message")
    if not condition_2:
        errors.append("an other error message")
    # assert no error message has been registered, else print messages
    assert not errors, "errors occurred:\n{}".format("\n".join(errors))
The original assertions are replaced by "if" statements which append a message to an "errors" list whenever a condition is not met. Then you assert that the "errors" list is empty (an empty list is falsy) and make the assertion message contain every message from the list.
You could also make a test generator as described in the nose documentation. I did not find any pytest doc which describes it, but pytest handled this in exactly the same manner as nose. (Note that yield-based test generators were deprecated in pytest 3.0 and removed in pytest 4.0; use pytest.mark.parametrize instead.)
Upvotes: 41