I'm writing a plugin for a database which might change the results returned from the DB. That is mostly not expected, and I want to know when it happens.
I have a few dozen tests, I add more for every function, and I would like a setup where all the tests are run once against the DB without the plugin and then again with the plugin, with an option to compare the results. It also needs to be easy to extend with more tests.
Currently I can change in a fixture whether the DB comes up with or without the plugin. Is there any way to make every test run twice, each time with a different fixture?
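To illustrate, my current fixture is roughly shaped like this sketch (start_db and the WITH_PLUGIN flag are made-up names standing in for my real startup code):

import pytest

WITH_PLUGIN = False  # edited by hand before a run to toggle the plugin


def start_db(with_plugin):
    # stand-in for the real startup logic that brings the DB up,
    # optionally loading the plugin
    return {'plugin_loaded': with_plugin}


@pytest.fixture
def db():
    yield start_db(with_plugin=WITH_PLUGIN)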
Upvotes: 2
Views: 1103
Unless I misunderstood your question, you can define a parametrized fixture that selects a specific implementation based on the current parameter (real or mock). Here is a working example using sqlalchemy with an SQLite database and alchemy-mock:
import pytest
from sqlalchemy import create_engine, Column, String, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from alchemy_mock.mocking import UnifiedAlchemyMagicMock

Base = declarative_base()


class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String)


@pytest.fixture
def real_db_session():
    # real backend: an SQLite database with one sample row
    engine = create_engine('sqlite:///real.db')
    with engine.connect() as conn:
        Session = sessionmaker(bind=conn)
        Base.metadata.create_all(engine)
        session = Session()
        sample_item = Item(name='fizz')
        session.add(sample_item)
        session.commit()
        yield session


@pytest.fixture
def mocked_db_session():
    # mocked backend: a session double preloaded with the same sample row
    session = UnifiedAlchemyMagicMock()
    session.add(Item(name='fizz'))
    return session


@pytest.fixture(params=('real', 'mock'))
def db_session(request, real_db_session, mocked_db_session):
    # parametrized fixture: each test using db_session runs twice,
    # once per backend
    backend_type = request.param
    if backend_type == 'real':
        return real_db_session
    elif backend_type == 'mock':
        return mocked_db_session
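One possible refinement, not in the snippet above: as written, both backend fixtures are instantiated for every test even though only one is used. If the real database is expensive to set up, the selection can be made lazy with request.getfixturevalue:

@pytest.fixture(params=('real', 'mock'))
def db_session(request):
    # resolve only the backend selected by the current parameter
    backends = {'real': 'real_db_session', 'mock': 'mocked_db_session'}
    return request.getfixturevalue(backends[request.param])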
Test example:
def test_fizz(db_session):
    assert db_session.query(Item).one().name == 'fizz'
Execution yields:
$ pytest -v
======================================= test session starts ========================================
platform linux -- Python 3.6.8, pytest-4.4.2, py-1.8.0, pluggy-0.11.0
cachedir: .pytest_cache
rootdir: /home/hoefling/projects/private/stackoverflow/so-56558823
plugins: xdist-1.28.0, forked-1.0.2, cov-2.7.1
collected 2 items
test_spam.py::test_fizz[real] PASSED [ 50%]
test_spam.py::test_fizz[mock] PASSED [100%]
===================================== 2 passed in 0.18 seconds =====================================
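You can also run just one of the variants while debugging, e.g. pytest -k real or pytest -k mock, since -k matches against the parametrized test ids.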
To control the execution order (for example, to run all the real tests first and only then the rest), implement a custom pytest_collection_modifyitems hook and re-sort the list of collected tests:
# conftest.py

def pytest_collection_modifyitems(session, config, items):
    # tests whose id contains 'real' sort first
    items.sort(key=lambda item: 'real' in item.name, reverse=True)
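Matching on the substring 'real' is a bit loose, since it would also move a test that merely has 'real' in its own name. A stricter key (a hypothetical refinement, assuming db_session is the only parametrized fixture involved) can read the parameter from the item's callspec:

# conftest.py -- stricter ordering key (sketch, alternative to the hook above)
def pytest_collection_modifyitems(session, config, items):
    def uses_real_backend(item):
        callspec = getattr(item, 'callspec', None)  # missing on unparametrized tests
        return callspec is not None and callspec.params.get('db_session') == 'real'
    items.sort(key=uses_real_backend, reverse=True)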
To compare the outcomes of each real/mock pair once the whole run is over, you can collect the test reports and inspect them at session teardown. This part is based on my answer to the question How can I access the overall test result of a pytest test run during runtime?. Loosely following it:
# conftest.py
import itertools
import operator

import pytest


def pytest_sessionstart(session):
    # storage for test reports, filled in by pytest_runtest_makereport below
    session.results = dict()


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == 'call':
        # keep only the report of the test call phase
        item.session.results[item] = result


@pytest.fixture(scope='session', autouse=True)
def compare_results(request):
    yield  # wait for all tests to finish
    results = request.session.results

    # partition test results into reals and mocks
    def partition(pred, coll):
        first, second = itertools.tee(coll)
        return itertools.filterfalse(pred, first), filter(pred, second)

    mocks, reals = partition(lambda item: item.name.endswith('[real]'), results.keys())

    # process test results in pairs
    by_name = operator.attrgetter('name')
    for real, mock in zip(sorted(reals, key=by_name), sorted(mocks, key=by_name)):
        if results[real].outcome != results[mock].outcome:
            pytest.fail(
                'A pair of tests has different outcomes:\n'
                f'outcome of {real.name} is {results[real].outcome}\n'
                f'outcome of {mock.name} is {results[mock].outcome}'
            )
Of course, this is just a stub: the comparison fails on the first pair of tests with different outcomes, and partitioning the keys of the results dict will produce uneven lists of reals and mocks if you have unparametrized tests, etc.
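One way to make the pairing more robust (a sketch, assuming the parametrized ids always end in [real] or [mock]) is to group the reports by base test name instead of zipping two sorted lists, skipping anything that has no counterpart:

# sketch: pair collected items by base test name, ignoring unpaired ones
def pairs_by_base_name(items):
    grouped = {}
    for item in items:
        for suffix in ('[real]', '[mock]'):
            if item.name.endswith(suffix):
                grouped.setdefault(item.name[:-len(suffix)], {})[suffix] = item
    for pair in grouped.values():
        if '[real]' in pair and '[mock]' in pair:
            yield pair['[real]'], pair['[mock]']

The loop in compare_results could then iterate over pairs_by_base_name(results) and collect all mismatches before calling pytest.fail once.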
Upvotes: 2