jacg

Reputation: 2120

pytest: Reusable tests for different implementations of the same interface

Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.

test_foo.py:

from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>

Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests written for the above suite: the only things that need to change are

  1. The import of the Implementation
  2. <test data set 1>, <test data set 2> etc.

So I am looking for a way of writing the above tests reusably, so that authors of new implementations of the interface can use them by injecting their implementation and their test data, without modifying the file containing the original specification of the tests.

What would be a good, idiomatic way of doing this in pytest?

====================================================================

Here is a unittest version that (isn't pretty but) works.

define_tests.py:

# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():

    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_),
                             out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_),
                             out)

v1.py:

# One implementation of the interface
class Implementation:

    def __init__(self, a,b):
        self.n = a+b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n

v1_test.py:

# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into testing framework by inheriting unittest.TestCase and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):

    Implementation = Implementation

    test_foo_data = (((1,2), 3,  6),
                     ((4,5), 6, 15))

    test_bar_data = (((1,2), 3,  0),
                     ((4,5), 6,  3))

Anybody (even a client of the library) writing another implementation of this interface merely has to provide the implementation and the test data; the test definitions themselves are reused unchanged.
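For example, a hypothetical second implementation would get its own small test module following the same pattern as v1_test.py (v2 and its data are placeholders, written in the same schematic style as above):

v2_test.py:

# Hypothetical test module for a second implementation; it reuses the same
# test definitions, and only the implementation and the data change.
from v2 import Implementation
from define_tests import TheTests
from unittest import TestCase

class V2Tests(TestCase, TheTests):

    Implementation = Implementation

    test_foo_data = <data appropriate to this implementation>
    test_bar_data = <data appropriate to this implementation>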

Upvotes: 28

Views: 13925

Answers (5)

therightstuff

Reputation: 1027

I had a similar requirement, but a little more constrained: two sets of test infrastructure for different modules that needed to live in separate packages (for a variety of reasons), but with common tests.

To resolve this, I used an autouse fixture to set an environment variable indicating which module is under test:

import os
import pytest

@pytest.fixture(autouse=True)
def module_under_testing():
    os.environ["MODULE_UNDER_TESTING"] = "sample_module"

The common test class looks something like this:

import os
import unittest

class TestTheModule(unittest.TestCase):
    def test_module_valid(self):
        module_name = os.environ["MODULE_UNDER_TESTING"]
        if module_name == "sample_module":
            import sample_module as imported_module
        else:
            raise Exception(f"{module_name} not supported")
        # start your testing here
        assert imported_module.is_valid

In my test folders, my test files simply import the above test class to run them:

from test.common import TestTheModule
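For the second package, a parallel conftest.py would point the same tests at its own module. A minimal sketch, assuming the fixture lives in each package's conftest.py and that the other module is called other_module (both names are illustrative):

import os
import pytest

@pytest.fixture(autouse=True)
def module_under_testing():
    # Same autouse fixture as above, but selecting this package's module.
    os.environ["MODULE_UNDER_TESTING"] = "other_module"

The common test class would then also need a branch that imports other_module.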

Upvotes: 0

feran

Reputation: 373

I did something similar to what @Daniel Barton suggested, adding additional fixtures.

Let's say you have 1 interface and 2 implementations:

class Imp1(InterfaceA):
    pass # Some implementation.
class Imp2(InterfaceA):
    pass # Some implementation.

You can indeed encapsulate testing in subclasses:

import pytest

@pytest.fixture
def imp_1():
    yield Imp1()

@pytest.fixture
def imp_2():
    yield Imp2()


class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass
    
    def test_x(self, imp):
        assert imp.test_x()
    
    def test_y(self, imp):
        assert imp.test_y()

class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()

class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2

Note how, by adding a derived class and overriding the fixture that returns the implementation, you can run all the inherited tests against that implementation; and if there are implementation-specific tests, they can be written in the derived class as well.

Upvotes: 2

jxramos

Reputation: 8276

Conditional Plugin Based Solution

There is in fact a technique that leans on the pytest_plugins list, where you condition its value on something outside pytest, such as environment variables or command-line arguments. Consider the following:

import os

if os.environ["pytest_env"] == "env_a":
    pytest_plugins = [
        "projX.plugins.env_a",
    ]
elif os.environ["pytest_env"] == "env_b":
    pytest_plugins = [
        "projX.plugins.env_b",
    ]
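For illustration, each plugin module would then define its own version of the shared fixture. A minimal sketch of what projX/plugins/env_a.py might contain; the fixture name and the stand-in class are assumptions, not taken from the repo:

# projX/plugins/env_a.py (hypothetical contents)
import pytest

class EnvAImplementation:
    # Stand-in for environment A's real implementation of the interface.
    def frobnicate(self):
        return True

@pytest.fixture
def implementation():
    # Test modules request "implementation" and get the env_a flavour
    # whenever this plugin is the one selected above.
    return EnvAImplementation()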

I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment. https://github.com/jxramos/pytest_behavior

This would position you to use the same test module with two different implementations of an identically named fixture. However, you'd need to invoke the tests once per implementation, with the selection mechanism singling out the fixture implementation of interest. You'd therefore need two pytest sessions to test the two fixture variations.

In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behaviour of the test module and of any intermediate conftest.py files, if you leave the directory structure as is. But if you're free to move files around without changing their contents, you just need to get the existing conftest.py file out of the path from the test module to the root directory and rename it so it can be picked up as a plugin instead.

Configuration / Command line Selection of Plugins

pytest actually has a -p command-line option, which can be given multiple times to specify the plugin modules. You can learn more about that control from the ini_plugin_selection experiment in the pytest_behavior repo.

Parametrization over Fixture Values

As of this writing this is a work in progress for core pytest functionality, but there is a third-party plugin, pytest-cases, which supports using a fixture itself as a parameter to a test case. With that capability you can parametrize a single test case over multiple fixtures, each fixture backed by one implementation of the API. This sounds like the ideal solution for your use case; however, you would still need to decorate the existing test module with new source to enable this parametrization over fixtures, which may not be permissible in your situation.

Take a look at the rich discussion in the open pytest issue #349, "Using fixtures in pytest.mark.parametrize", specifically this comment, which links to a concrete write-up demonstrating the new fixture-parametrization syntax.

Commentary

I get the sense that the fixture hierarchy one can build above a test module, all the way up to the run's root directory, is oriented more towards fixture reuse than towards test module reuse. If you think about it, you can define several fixtures high up in a common parent folder from which a bunch of test modules branch out, potentially ending up deep down in a number of child subdirectories. Each of those test modules has access to the fixtures defined in that parent conftest.py, but without extra work each of them gets only one definition per fixture name across all the intermediate conftest.py files, even if the same name is reused across that hierarchy. The fixture chosen is the one closest to the test module, via pytest's fixture-overriding mechanism, but resolution stops at the test module; it does not continue into folders beneath the test module, where variation might be found. Essentially there is only one path from the test module to the root dir, which limits each fixture name to one definition. That gives us a one-fixture-to-many-test-modules relationship.

Upvotes: 0

Frank T

Reputation: 9046

This is a great use case for parametrized test fixtures.

Your code could look something like this:

import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param

def test_one(Implementation):
    assert Implementation().frobnicate()

This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.

Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
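For instance, a variation of the fixture above (just a sketch; the lowercase name, the module scope, and the zero-argument constructors are illustrative choices, not part of the answer) could hand the tests ready-made instances instead of classes:

import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz], scope="module")
def implementation(request):
    # One instance per implementation class, shared across the module.
    return request.param()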

If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:

@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing

test_two will run four times, with the following configurations:

  • Implementation=Bar, thing=1
  • Implementation=Bar, thing=2
  • Implementation=Baz, thing=1
  • Implementation=Baz, thing=2

Upvotes: 15

Daniel Barton

Reputation: 531

You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-like, you can use fixtures.

This allows you, for example, to parametrize fixtures or to use other fixtures.

I'll try to create a simple example.

import pytest


class SomeTest:

    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation


class OtherTest(SomeTest):

    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param


def test_a(implementation):
    """ the "implementation" fixture is not accessible out of class """
    assert "A" == implementation

and the remaining tests fail (the last one cannot even find the fixture):

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

  def test_a(implementation):
        fixture 'implementation' not found

Don't forget you have to set python_classes = *Test in pytest.ini, since the class names above don't start with pytest's default Test prefix.
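For reference, the corresponding configuration would be:

pytest.ini:

[pytest]
python_classes = *Test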

Upvotes: 4
