lps

Reputation: 1408

How to get pytest fixture data dynamically

I'm trying to define init data for several test scenarios that exercise a single API endpoint, so that I don't have to write boilerplate for multiple iterations of a test where only the data differs. I can't seem to wrap my head around how to do this using the built-in pytest fixtures. Here's essentially what I'm trying to do:

In tests/conftest.py:

import pytest

@pytest.fixture(scope="module")
def data_for_a():
    return "a_data"

@pytest.fixture(scope="module")
def data_for_b():
    return "b_data"

In tests/tests.py

import pytest

# this works
def test_a(data_for_a):
    assert "a_data" == data_for_a

# but I want to do this and it fails:
scenarios = [
    { "name": "a", "data": data_for_a },
    { "name": "b", "data": data_for_b },
]

for scenario in scenarios:
    print(scenario["name"], scenario["data"])

# desired output:
# "a a_data"
# "b b_data"

I get a NameError: name 'data_for_a' is not defined. I've tried various approaches, but there seems to be no way around passing the fixture as a parameter to the test function - so I either define a bunch of boilerplate tests or put a bunch of if/else statements in a single test and pass each fixture explicitly. I don't like either option. At the moment it seems like I have to build my own helper module to pull in this test data, but I'd rather use the built-in mechanism. Is there any way to do this?

Upvotes: 1

Views: 5607

Answers (3)

claudio

Reputation: 178

Here is a leaner solution using a combination of indirect parametrization and getfixturevalue:

"""
file tests/conftest.py
"""
import pytest


@pytest.fixture
def a_data() -> str:
    return "a_data_str"


@pytest.fixture
def b_data() -> str:
    return "b_data_str"


@pytest.fixture
def make_scenario(request) -> dict[str, str]:
    scenario_name = request.param  # either 'a' or 'b'
    data = request.getfixturevalue(f"{scenario_name}_data")
    return {"name": scenario_name, "data": data}

and

"""
file tests/test_.py
"""
import pytest


@pytest.mark.parametrize(
    "make_scenario",
    ["a", "b"],
    indirect=["make_scenario"],
)
def test_scenario(make_scenario) -> None:
    assert make_scenario["data"].endswith("_str")

Leading to this output

===================================== test session starts =====================================
platform linux -- Python 3.12.4, pytest-8.3.2, pluggy-1.5.0 -- /tmp/dyn_fixt/.venv/bin/python
cachedir: .pytest_cache
rootdir: /tmp/dyn_fixt
configfile: pyproject.toml
collected 2 items                                                                             

tests/test_.py::test_scenario[a] PASSED
tests/test_.py::test_scenario[b] PASSED

====================================== 2 passed in 0.02s ======================================

The scenario names appear in the pytest output and are treated as separate tests.
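If you don't need a reusable fixture, a simpler hypothetical variant (reusing the a_data/b_data fixture names from above) calls request.getfixturevalue directly in the parametrized test body:

```python
import pytest


@pytest.fixture
def a_data() -> str:
    return "a_data_str"


@pytest.fixture
def b_data() -> str:
    return "b_data_str"


@pytest.mark.parametrize("scenario_name", ["a", "b"])
def test_scenario_direct(scenario_name: str, request) -> None:
    # Resolve the fixture whose name matches the scenario at runtime
    data = request.getfixturevalue(f"{scenario_name}_data")
    assert data.endswith("_str")
```

The trade-off is that the lookup logic lives in each test body instead of a shared fixture.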

Upvotes: 2

Mackie Messer

Reputation: 1328

It's been a while since you posted, so there's a chance this functionality wasn't built into pytest at the time.

I believe what you're looking for is pytest_generate_tests. You can define this in a conftest.py module (placed in the directory containing the tests to be run), which is automatically parsed prior to running any tests via pytest. This function can be used to 'parametrize' [sic] your test functions or your fixtures dynamically, allowing you to define on the fly the set of inputs over which you would like the test/fixture to iterate.

I've included an example. Consider the following directory structure:

tests
 |
 +-- examples.py
 +-- test_examples.py
 +-- conftest.py

Now let's look at each file...

# examples.py
# -----------
example_1 = {
    "friendship": 0.0,
    "totes": 0.0,
}

example_2 = {
    "friendship": 0.0,
    "totes": 0.0,
}

dont_use_me = {
    "friendship": 1.0,
    "totes": 1.0,
}

...

# test_examples.py
# ----------------
def test_answer(pydict_fixture):
    for k,v in pydict_fixture.items():
        assert v==0.0

...

# conftest.py
# -----------
from os.path import join, dirname, abspath
import importlib.util
import re

def pytest_generate_tests(metafunc):
    this_dir    = dirname(abspath(metafunc.module.__file__))
    #
    if 'pydict_fixture' in metafunc.fixturenames:
        examples_file = join(this_dir, "examples.py")
        # imp.load_source is Python 2 era (imp was removed in Python 3.12);
        # load the module via importlib instead
        spec = importlib.util.spec_from_file_location('examples', examples_file)
        examples_module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(examples_module)
        examples_regex = re.compile("example")
        examples = []
        for name, val in examples_module.__dict__.items():
            if examples_regex.search(name):
                examples.append(val)
        metafunc.parametrize('pydict_fixture', examples)

In this particular example, I wanted to manage the test cases in a single, separate file. So, I wrote a pytest_generate_tests function that, before any tests are run, parses examples.py, creates a list of dictionaries whose names include the word 'example', and forces test_answer to be run on each dictionary in the list. So, test_answer will be called twice, once on example_1 and once on example_2. Both tests will pass.

That's the short version. The most important thing is that the list of inputs is determined dynamically inside pytest_generate_tests, and the test is run once per item in the list.

However, to be complete in my description of what I wrote here, my pytest_generate_tests function actually creates a list of inputs for every test function (represented by pytest's predefined metafunc variable in pytest_generate_tests) that uses the imaginary pydict_fixture, and it looks for examples.py in the directory of the test module being collected. So this could potentially be extended to run a bunch of different tests on a bunch of different examples.py files.
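As a hypothetical minimal variant of the same hook, the examples could live in a dict inside conftest.py itself rather than a separate examples.py file; the EXAMPLES name below is made up for illustration, and ids= gives each generated test a readable name:

```python
# conftest.py (sketch): parametrize the fixture from an in-module dict
EXAMPLES = {
    "example_1": {"friendship": 0.0, "totes": 0.0},
    "example_2": {"friendship": 0.0, "totes": 0.0},
}


def pytest_generate_tests(metafunc):
    if "pydict_fixture" in metafunc.fixturenames:
        # ids= makes the generated tests show up as
        # test_answer[example_1] and test_answer[example_2]
        metafunc.parametrize(
            "pydict_fixture",
            list(EXAMPLES.values()),
            ids=list(EXAMPLES.keys()),
        )
```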

Upvotes: 2

Rob Gwynn-Jones

Reputation: 687

You can import from your conftest.py like so:

from conftest import data_for_a, data_for_b

or

from conftest import *

which will allow you to reference that function without passing it as a parameter to a test function.

Edit: Note that this is generally not recommended practice according to the official pytest documentation:

If you have conftest.py files which do not reside in a python package directory (i.e. one containing an __init__.py) then “import conftest” can be ambiguous because there might be other conftest.py files as well on your PYTHONPATH or sys.path. It is thus good practise for projects to either put conftest.py under a package scope or to never import anything from a conftest.py file.

Upvotes: 1
