Lou

Reputation: 2509

Parametrizing multiple tests dynamically in Python

I'm attempting to use pytest to write a dynamic test suite, where the test data is held in a separate file, e.g. a YAML or CSV file. I want to run multiple tests, all of which are parameterised from the same file. Let's say I have a test file test_foo.py that looks like this:

import pytest

import foo  # the module under test, which provides addnums()

@pytest.mark.parametrize("num1, num2, output", ([2, 2, 4], [3, 7, 10], [48, 52, 100]))
def test_addnums(num1, num2, output):
    assert foo.addnums(num1, num2) == output

@pytest.mark.parametrize("foo, bar", ([1, 2], ['moo', 'mar'], [0.5, 3.14]))
def test_foobar(foo, bar):
    assert type(foo) == type(bar)

Using the parametrize decorator, I can run multiple tests in pytest, and that works as expected:

test_foo.py::test_addnums[2-2-4] PASSED
test_foo.py::test_addnums[3-7-10] PASSED
test_foo.py::test_addnums[48-52-100] PASSED
test_foo.py::test_foobar[1-2] PASSED
test_foo.py::test_foobar[moo-mar] PASSED
test_foo.py::test_foobar[0.5-3.14] PASSED

But I want to parameterise these tests dynamically. By that I mean I want to write the test data for all tests in a separate file, so that when I run pytest, each test function is parameterised with the data I've written for it. Let's say I had a YAML file that looked something like this:

test_addnums:
  params: [num1, num2, output]
  values:
    - [2, 2, 4]
    - [3, 7, 10]
    - [48, 52, 100]

test_foobar:
  params: [foo, bar]
  values:
    - [1, 2]
    - [moo, mar]
    - [0.5, 3.14]

I would then want to read this YAML file and use the data to parameterise all test functions in my test file.
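
Reading the file itself isn't the problem; a quick sketch with PyYAML would do it (test_data.yml is a placeholder name for the file above):

import yaml  # PyYAML

# Sketch: load the whole parameter file into a dict keyed by test name.
# "test_data.yml" is a placeholder for the YAML file shown above.
with open("test_data.yml") as f:
    all_params = yaml.safe_load(f)

# all_params["test_addnums"]["params"] -> ['num1', 'num2', 'output']
# all_params["test_addnums"]["values"] -> [[2, 2, 4], [3, 7, 10], [48, 52, 100]]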

I'm aware of the pytest_generate_tests hook, and I've been trying to use it to generate tests dynamically. I tried passing the same parameters and data values that I previously gave to the parametrize decorator into metafunc.parametrize calls inside the hook:

import foo  # the module under test, as before

def pytest_generate_tests(metafunc):
    metafunc.parametrize("num1, num2, output", ([2, 2, 4], [3, 7, 10], [48, 52, 100]))
    metafunc.parametrize("foo, bar", ([1, 2], ['moo', 'mar'], [0.5, 3.14]))

def test_addnums(num1, num2, output):
    assert foo.addnums(num1, num2) == output

def test_foobar(foo, bar):
    assert type(foo) == type(bar)

This doesn't work, however, because pytest tries to apply the test data to every function:

collected 0 items / 1 error                                           

=============================== ERRORS ================================
____________________ ERROR collecting test_foo.py _____________________
In test_addnums: function uses no argument 'foo'
======================= short test summary info =======================
ERROR test_foo.py
!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!
========================== 1 error in 0.16s ===========================

What I want to know is: how can I dynamically parameterise multiple tests using pytest? I've introspected pytest using pdb, and from what I can tell, metafunc is only aware of the first test you've defined in the file. In my above example, test_addnums is defined first, so when I print vars(metafunc) in the pdb debugger, it shows these values:

(Pdb) pp vars(metafunc)
{'_arg2fixturedefs': {},
 '_calls': [<_pytest.python.CallSpec2 object at 0x7f4330b6e860>,
            <_pytest.python.CallSpec2 object at 0x7f4330b6e0b8>,
            <_pytest.python.CallSpec2 object at 0x7f4330b6e908>],
 'cls': None,
 'config': <_pytest.config.Config object at 0x7f43310dbdd8>,
 'definition': <FunctionDefinition test_addnums>,
 'fixturenames': ['num1', 'num2', 'output'],
 'function': <function test_addnums at 0x7f4330b5a6a8>,
 'module': <module 'test_foo' from '<PATH>/test_foo.py'>}

But if I switch around the test_foobar and test_addnums functions, and reverse the order of the parametrize calls, it shows information about test_foobar instead.

(Pdb) pp vars(metafunc)
{'_arg2fixturedefs': {},
 '_calls': [<_pytest.python.CallSpec2 object at 0x7f6d20d5e828>,
            <_pytest.python.CallSpec2 object at 0x7f6d20d5e860>,
            <_pytest.python.CallSpec2 object at 0x7f6d20d5e898>],
 'cls': None,
 'config': <_pytest.config.Config object at 0x7f6d212cbd68>,
 'definition': <FunctionDefinition test_foobar>,
 'fixturenames': ['foo', 'bar'],
 'function': <function test_foobar at 0x7f6d20d4a6a8>,
 'module': <module 'test_foo' from '<PATH>/test_foo.py'>}

So it seems like metafunc doesn't actually store information about every test function in my test file. Therefore I can't use its fixturenames or function attributes to cover all tests, as they only apply to one particular function, not all of them.

If that's the case, then how can I access all of the other test functions and parameterise them individually?

Upvotes: 5

Views: 4717

Answers (2)

Kale Kundert

Reputation: 1494

I wrote a package called parametrize_from_file for this exact purpose. It works by providing a decorator that basically does the same thing as @pytest.mark.parametrize, except that it reads parameters from an external file. I think this approach is much simpler than messing around with pytest_generate_tests.

Here's how it would look for the sample data you gave above. First, we need to reorganize the data so that the top level is a dictionary keyed on the test names, the second level is a list of test cases, and the third level is a dictionary of parameter names to parameter values:

test_addnums:
  - num1: 2
    num2: 2
    output: 4

  - num1: 3
    num2: 7
    output: 10

  - num1: 48
    num2: 52
    output: 100

test_foobar:
  - foo: 1
    bar: 2

  - foo: moo
    bar: mar

  - foo: 0.5
    bar: 3.14

Next, we just need to apply the @parametrize_from_file decorator to the tests:

import parametrize_from_file

import foo  # the module under test, which provides addnums()

@parametrize_from_file
def test_addnums(num1, num2, output):
    assert foo.addnums(num1, num2) == output

@parametrize_from_file
def test_foobar(foo, bar):
    assert type(foo) == type(bar)

This assumes that @parametrize_from_file is able to find the parameter file in the default location, which is a file with the same base name as the test script (e.g. test_things.{yml,toml,nt} for test_things.py). But you can also specify a path manually, as shown below.
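
For example, passing a path as the first argument to the decorator (a quick sketch; the filename here is just a placeholder):

import parametrize_from_file

import foo  # the module under test

# Sketch: point the decorator at a specific parameter file instead of
# letting it look in the default location. The path is a placeholder.
@parametrize_from_file('data/addnums_params.yml')
def test_addnums(num1, num2, output):
    assert foo.addnums(num1, num2) == output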

Some other features of parametrize_from_file that are worth briefly mentioning, and which would be annoying to implement yourself via pytest_generate_tests:

  • You can specify ids and marks on a per-test-case basis (see the sketch after this list).
  • You can apply a schema to the test cases. I often use this to eval snippets of python code.
  • You can use both @parametrize_from_file and @pytest.mark.parametrize any number of times on the same test function.
  • You'll get good error messages if anything about the parameter file doesn't make sense (e.g. wrong organization, missing names, inconsistent parameter sets, etc.).
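
To illustrate the first point, ids and marks can be attached to individual test cases with extra keys in the parameter file, along these lines (a sketch from memory; treat the exact key names as assumptions and check the docs):

test_addnums:
  - id: small-numbers
    num1: 2
    num2: 2
    output: 4

  - id: big-numbers
    marks: skip
    num1: 48
    num2: 52
    output: 100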

Upvotes: 3

MrBean Bremen

Reputation: 16805

You can do this using pytest_generate_tests, as you have tried; you just have to select the correct parameters for each function (for simplicity, I've put the result of parsing the YAML into a global dict):

all_params = {
    "test_addnums": {
        "params": ["num1", "num2", "output"],
        "values": [
            [2, 2, 4],
            [3, 7, 10],
            [48, 52, 100],
        ],
    },
    "test_foobar": {
        "params": ["foo", "bar"],
        "values": [
            [1, 2],
            ["moo", "mar"],
            [0.5, 3.14],
        ],
    },
}


def pytest_generate_tests(metafunc):
    fct_name = metafunc.function.__name__
    if fct_name in all_params:
        params = all_params[fct_name]
        metafunc.parametrize(params["params"], params["values"])


def test_addnums(num1, num2, output):
    assert num1 + num2 == output


def test_foobar(foo, bar):
    assert type(foo) == type(bar)

Here is the corresponding output:

$ python -m pytest -v param_multiple_tests.py
...
collected 6 items

param_multiple_tests.py::test_addnums[2-2-4] PASSED
param_multiple_tests.py::test_addnums[3-7-10] PASSED
param_multiple_tests.py::test_addnums[48-52-100] PASSED
param_multiple_tests.py::test_foobar[1-2] PASSED
param_multiple_tests.py::test_foobar[moo-mar] PASSED
param_multiple_tests.py::test_foobar[0.5-3.14] PASSED
===================== 6 passed in 0.27s =======================
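
If you want to build all_params from the actual YAML file in your question instead of hard-coding it, a minimal sketch using PyYAML could look like this (the filename is a placeholder):

import pathlib

import yaml  # PyYAML

# Sketch: read the parameter file once at import time; "params.yml" is a
# placeholder for wherever the test data lives.
_param_file = pathlib.Path(__file__).parent / "params.yml"
all_params = yaml.safe_load(_param_file.read_text())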

I think what you missed in the documentation is that pytest_generate_tests is called once for each test function separately. The more common way to use it is to check the fixturenames instead of the test names, e.g.:

def pytest_generate_tests(metafunc):
    if "foo" in metafunc.fixturenames and "bar" in metafunc.fixturenames:
        metafunc.parametrize(["foo", "bar"], ...)
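
Filled in with the data from your example, that variant could look like this (a sketch; it keys off the argument names each test requests rather than the test's name):

def pytest_generate_tests(metafunc):
    # Parametrize based on the argument names each test requests,
    # instead of matching on the test function's name.
    if {"num1", "num2", "output"} <= set(metafunc.fixturenames):
        metafunc.parametrize(
            ["num1", "num2", "output"],
            [[2, 2, 4], [3, 7, 10], [48, 52, 100]],
        )
    if {"foo", "bar"} <= set(metafunc.fixturenames):
        metafunc.parametrize(
            ["foo", "bar"],
            [[1, 2], ["moo", "mar"], [0.5, 3.14]],
        )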

Upvotes: 8
