Ivan Kremnev

Reputation: 41

Pytest: test only one instance of parametrized fixture

Setting

Suppose I have a conftest.py where I define resources for tests:

class ClassifyRequest:
    pass

class OCRRequest:
    pass

@pytest.fixture(params=[ClassifyRequest, OCRRequest])
def api_request(request):
    return request.param()  # return one of the request instances

Then, in test_api.py I use the parametrized fixture to test a service:

def test_service(api_request):
    response = send_request(api_request)
    assert response.ok()

All good, but then I want to run a test against only one specialization of the fixture, something like api_request[ClassifyRequest]:

@pytest.mark.usefixtures("api_request[ClassifyRequest]")  # hypothetical syntax, not real pytest
def test_classification():
    ...  # do something with the api_request fixture

Question

What is the way to specialize a parametrized fixture for a test function? I have a few ideas:

  1. Just use a non-parametrized fixture. This inevitably leads to boilerplate code.
  2. Remove explicit parametrization from the fixture decorator and use indirect parametrization like
    @pytest.mark.parametrize("api_request", ["classify", "ocr"], indirect=True)
    def test_api(api_request):
        ...  # do something
    
    Replacing class parameters with strings means creating two sources of configuration. What if I want to add another parameter? Do I have to design a new string for it, as well as instantiate an object inside the api_request fixture indirectly?
  3. The same, except keep the class parametrization and move the classes outside conftest.py, as names can't be imported from that module with import statements (sketched below).
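
For concreteness, idea 3 might look roughly like this on the test side; request_types.py is a hypothetical module name of my own:

# test_api.py
import pytest
from request_types import ClassifyRequest  # classes moved out of conftest.py

@pytest.mark.parametrize("api_request", [ClassifyRequest], indirect=True)
def test_classification(api_request):
    ...  # runs only with a ClassifyRequest instance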

Additional info

pytest==6.0.1

Upvotes: 4

Views: 2955

Answers (4)

Sgene9

Reputation: 186

You could also define a separate fixture for ClassifyRequest and use it both in your classification test and in the api_request fixture.

The assumption is that instantiation of the requests is cheap. If it is expensive, you could consider changing the scope of the fixtures (see the sketch after the code below).

@pytest.fixture
def classify_request():
    return ClassifyRequest()  # return ClassifyRequest instance

@pytest.fixture
def ocr_request():
    return OCRRequest()  # return OCRRequest instance

@pytest.fixture(params=["classify", "ocr"])
def api_request(request, classify_request, ocr_request):
    # return one of the request instances
    if request.param == "classify":
        return classify_request
    elif request.param == "ocr":
        return ocr_request
    assert False, f"unknown param: {request.param}"
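
On the expensive case: a minimal sketch of widening the scope, assuming one instance per test session is acceptable for your tests:

@pytest.fixture(scope="session")
def classify_request():
    # constructed once for the whole test session instead of once per test
    return ClassifyRequest()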

Or else (a slight modification of your idea), you could whitelist the classes you want from test_classification:

@pytest.fixture(params=[ClassifyRequest, OCRRequest])
def clazz(request):
    return request.param

@pytest.fixture
def api_request(clazz):
    return clazz()

@pytest.mark.parametrize("clazz", [ClassifyRequest], indirect=True)
def test_classification(api_request):
    ...  # do something

Upvotes: 0

Ivan Kremnev

Reputation: 41

Note: this is a preliminary conclusion; I created an issue on the pytest GitHub to see the community's opinion on this topic.


I think the cleanest way to implement the desired specialization is to move the request classes to a separate module and use conftest.py to declare fixture functions only. According to the pytest documentation,

If during implementing your tests you realize that you want to use a fixture function from multiple test files you can move it to a conftest.py file.

If you want to make test data from files available to your tests, a good way to do this is by loading these data in a fixture for use by your tests.

This way I can import the classes in test modules and parametrize the fixture indirectly through test functions. In my case the request classes are very easy to instantiate, so there's no need for a separate fixture for each class, only the aggregate one. But generally, it's fine to define a fixture for each resource plus an aggregate parametrized fixture.
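
A minimal sketch of that layout (module and file names are my own):

# request_types.py -- a plain module, importable from anywhere
class ClassifyRequest:
    pass

class OCRRequest:
    pass

# conftest.py -- fixture declarations only
import pytest
from request_types import ClassifyRequest, OCRRequest

@pytest.fixture(params=[ClassifyRequest, OCRRequest])
def api_request(request):
    return request.param()

# test_api.py
import pytest
from request_types import ClassifyRequest

def test_service(api_request):
    ...  # runs once per request class

@pytest.mark.parametrize("api_request", [ClassifyRequest], indirect=True)
def test_classification(api_request):
    ...  # runs only with a ClassifyRequest instance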

Upvotes: 0

Arun Kaliraja Baskaran

Reputation: 1086

What you are trying to achieve is test case filtering. The best place to do that is the pytest_collection_modifyitems hook.

This hook is called after all tests have been collected from the paths you pass to pytest.

You can put your filtering logic in this hook.

It's always best practice to associate IDs with your parameters, like below:

@pytest.fixture(params=[ClassifyRequest, OCRRequest], 
                ids=['ClassifyRequest', 'OCRRequest'])
def api_request(request):
    return request.param()  # return one of the request instances

Once you have this, your test cases will be generated as test_classification[ClassifyRequest] and test_classification[OCRRequest].

If you are running a specific set of tests and want the filtering to apply to all of them, the -k option by itself will suffice.

But since here you want to apply the rule only to a subset of tests, the hook definition can live in the module where the test is present.
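
A sketch of such a hook, assuming you want to drop the OCRRequest variant of one test (the rule itself is yours to define; originalname is the test's name without the parameter id):

# can live in conftest.py or, as noted, in the test's own module
def pytest_collection_modifyitems(config, items):
    selected, deselected = [], []
    for item in items:
        base = getattr(item, "originalname", item.name)
        # hypothetical rule: keep test_classification only for ClassifyRequest
        if base == "test_classification" and "OCRRequest" in item.name:
            deselected.append(item)
        else:
            selected.append(item)
    if deselected:
        # report the dropped tests as deselected rather than silently removing them
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected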

Upvotes: 0

Marek Piotrowski

Reputation: 3076

If all your fixture does is instantiation, I'd opt for parametrizing the test, not the fixture. But if you have to specialize it manually anyway, you could simply pull the fixture body into a wrapper that's callable from any test:

import pytest
import types

class ClassifyRequest:
    def __init__(self):
        print("ClassifyRequest ctor")

class OCRRequest:
    def __init__(self):
        print("OCRRequest ctor")

def api_request_wrapper(request):
    return request.param()  # return one of the request instances

@pytest.fixture(params=[ClassifyRequest, OCRRequest])
def api_request(request):
    return api_request_wrapper(request)

def test_classification():
    # build a stand-in for pytest's request object, exposing only .param
    request = types.SimpleNamespace()
    request.param = ClassifyRequest
    instance = api_request_wrapper(request)
    assert isinstance(instance, ClassifyRequest)

Seems a bit hacky, though.

Upvotes: 1
