Dominick Pastore

Reputation: 4425

How to parameterize a test depending on fixture parameters in pytest?

I have a Python program that generates C code according to an input specification. I am writing tests with pytest. Naturally, the testing strategy includes some tests on the generated C code.

For these tests, the plan looks like this: keep a set of specification files, each with its own set of test case files containing (input, expected output) pairs. For each specification, generate and build the C code once, then run the built program against each of that specification's test cases and compare the output.

This way, adding new test cases is as simple as adding a new specification or test case file. There is no need to copy and paste code in the test script.

I imagined it looking something like this:

# Get the list of specification files and test cases programmatically
specification_names = get_list_of_specifications()
test_cases = dict()
for spec in specification_names:
    # get_list_of_test_cases() returns a list of (input, output) tuples
    test_cases[spec] = get_list_of_test_cases(spec)

class GeneratedCode:
    def __init__(self, spec):
        """Generate the C code for spec in a temp directory"""
        self.name = spec
        ...
    
    def build(self):
        """Build the generated C code"""
        ...
    
    def run(self, input):
        """Run the code on the given input."""
        ...
    
    def cleanup(self):
        ...

@pytest.fixture(scope="module", params=specification_names)
def generated_code(request):
    code = GeneratedCode(request.param)
    code.build()
    yield code
    code.cleanup()

@pytest.mark.parametrize('test_input,expected_output', test_cases[???])
def test_generated_code(generated_code, test_input, expected_output):
    assert generated_code.run(test_input) == expected_output

Of course, the problem here is that @pytest.mark.parametrize() can't just use the same set of test cases each time since it depends on the specification the code was generated from. If we can get the parameter for the current fixture, we can look it up in the test_cases dict, but I'm not sure how to do that, or if it's even possible.

Is there a way to accomplish this? Is there some other way I should approach these tests?

Upvotes: 3

Views: 2504

Answers (2)

Dominick Pastore

Reputation: 4425

The indirect argument to @pytest.mark.parametrize can help make this work. It essentially allows parameterizing the fixture from the test function.

specification_names = get_list_of_specifications()
test_cases = []
for spec in specification_names:
    test_cases.extend([(spec, input, output) for (input, output) in
                       get_list_of_test_cases(spec)])

...

@pytest.fixture(scope="module")
def generated_code(request):
    code = GeneratedCode(request.param)
    code.build()
    yield code
    code.cleanup()

@pytest.mark.parametrize(
        'generated_code,test_input,expected_output',
        test_cases,
        indirect=['generated_code'],
        scope="module" # <-- This is important!
)
def test_generated_code(generated_code, test_input, expected_output):
    assert generated_code.run(test_input) == expected_output

Note the scope="module" in the parametrize decorator. If not specified, it would default to 'function', and in some cases (including this one), that seems to take precedence over the fixture's specified scope.

The details here are fuzzy to me. The documentation on what scope even means for @pytest.mark.parametrize is not very clear. It seems that if all the parameters in parametrize are indirect, the fixture uses its own scope; otherwise, the scope from parametrize wins. But also, if multiple test functions use the same fixture with indirect, they often end up in different scopes regardless of what you specify, and I'm not sure why. This area was previously buggy in pytest, and it's possible it still is.

In any case, the code above should do what you want, but it might be a good idea to treat the fixture scope more as a performance optimization and not rely on it for correct test behavior (which it sounds like you were already doing).
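As a quick sanity check on the data shaping above, here is the flattening step run against stub versions of get_list_of_specifications() and get_list_of_test_cases() (the names and the data are made up for illustration). Each resulting (spec, input, output) tuple parametrizes one test, with the spec column routed to the fixture via indirect:

```python
# Stub data standing in for the real specification/test-case discovery
def get_list_of_specifications():
    return ["spec_a", "spec_b"]

def get_list_of_test_cases(spec):
    # Returns (input, output) tuples; contents here are hypothetical
    return {"spec_a": [(1, 2), (3, 4)], "spec_b": [(5, 6)]}[spec]

test_cases = []
for spec in get_list_of_specifications():
    test_cases.extend((spec, inp, out)
                      for (inp, out) in get_list_of_test_cases(spec))

print(test_cases)
# → [('spec_a', 1, 2), ('spec_a', 3, 4), ('spec_b', 5, 6)]
```

With scope="module", pytest groups the rows by the spec column, so each specification's code is generated and built once and reused across its test cases.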

Upvotes: 2

jmunsch

Reputation: 24089

Might be able to wire together the data by yielding the spec back as part of a tuple from generated_code:

@pytest.fixture(scope="module", params=specification_names)
def generated_code(request):
    spec = request.param
    code = GeneratedCode(spec)
    code.build()
    yield code, spec
    code.cleanup()

def test_generated_code(generated_code):
    code, spec = generated_code
    for test_input, expected_output in test_cases[spec]:
        assert code.run(test_input) == expected_output

Another way to do this that I can think of is to use subTest from unittest, part of the Python standard library:

import unittest

class TestSequence(unittest.TestCase):

    def _setup(self, spec):
        self.code = GeneratedCode(spec)
        self.code.build()

    def test_generated_code(self):
        for spec, cases in test_cases.items():
            with self.subTest(spec=spec):
                self._setup(spec)
                try:
                    for test_input, expected_output in cases:
                        assert self.code.run(test_input) == expected_output
                finally:
                    self.code.cleanup()

(Cleanup happens inside the loop rather than in tearDown, since tearDown would only run once per test method and clean up the last spec's code.)
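As a minimal, self-contained illustration of the subTest mechanics (with made-up data and a trivial stand-in for running the generated code), each case is reported separately and a failure in one subtest does not stop the others:

```python
import unittest

# Hypothetical per-spec cases, shaped like the test_cases dict above
CASES = {"spec_a": [(1, 2), (3, 4)], "spec_b": [(5, 6)]}

class TestSubTestDemo(unittest.TestCase):
    def test_all_cases(self):
        for spec, cases in CASES.items():
            for test_input, expected_output in cases:
                with self.subTest(spec=spec, test_input=test_input):
                    # Stand-in for generated_code.run(test_input)
                    self.assertEqual(test_input + 1, expected_output)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSubTestDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that unittest still counts this as a single test method; only failing subtests show up individually in the report.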

Upvotes: 2
