Oddthinking

Reputation: 25342

Can Python's unittest test in parallel, like nose can?

Python's nose testing framework has the concept of running multiple tests in parallel.

The purpose of this is not to test concurrency in the code, but to make tests for code that has "no side-effects, no ordering issues, and no external dependencies" run faster. The performance gain comes from concurrent I/O waits when tests access different devices, from better use of multiple CPUs/cores, and from running time.sleep() statements in parallel.

I believe the same thing could be done with Python's unittest testing framework by plugging in a different test runner.

Has anyone had any experience with such a beast, and can they make any recommendations?

Upvotes: 66

Views: 55571

Answers (9)

Oleg Neumyvakin

Reputation: 10322

Gevent-based projects can use Gevent's internal testing framework: https://www.gevent.org/development/running_tests.html

You can run your tests with the following command:

python -m gevent.tests --config /yourtests/yoursuite/known_failures.py -j 3 --no-combine --package yourtests.yoursuite

known_failures.py can be a copy of src/gevent/tests/known_failures.py, or it can simply define the lists FAILING_TESTS = [], IGNORED_TESTS = [], and RUN_ALONE = [].

Upvotes: 0

blhsing

Reputation: 107050

Since Python 3.3, you can run the built-in test package with the -j/--multiprocess option to specify the number of worker processes that run the tests in parallel.

By default it runs Python's own regression tests, but you can specify your own test directory with the --testdir option, and use the -m/--match option to filter tests by a glob pattern.

For example, to run all tests in the current directory with 8 worker processes:

python -m test -j8 --testdir .

Note that the test package is meant for internal use by Python only, and the -j option is currently an undocumented feature, so use it at your own risk. That said, the CPython project on GitHub relies on this feature to speed up its build tests, which hopefully means you can also rely on it for the foreseeable future.

Note also the limitation that the test package only looks for test modules prefixed with test_, with no option to specify a different name prefix or pattern; the test_ prefix is a good naming convention to stick to anyway.

Upvotes: 2

xxks-kkk

Reputation: 2608

You can subclass unittest.TestSuite and implement a concurrency scheme of your choice, then use your customized TestSuite class just like a normal one. In the following example, I implement my customized TestSuite class using asyncio:

import unittest
import asyncio

class CustomTestSuite(unittest.TestSuite):
    def run(self, result, debug=False):
        """
        We override the 'run' routine to support the execution of unittest in parallel
        :param result:
        :param debug:
        :return:
        """
        topLevel = False
        if getattr(result, '_testRunEntered', False) is False:
            result._testRunEntered = topLevel = True
        asyncMethod = []
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        for index, test in enumerate(self):
            asyncMethod.append(self.startRunCase(index, test, result))
        if asyncMethod:
            loop.run_until_complete(asyncio.wait(asyncMethod))
        loop.close()
        if topLevel:
            self._tearDownPreviousClass(None, result)
            self._handleModuleTearDown(result)
            result._testRunEntered = False
        return result

    async def startRunCase(self, index, test, result):
        def _isnotsuite(test):
            "A crude way to tell apart testcases and suites with duck-typing"
            try:
                iter(test)
            except TypeError:
                return True
            return False

        loop = asyncio.get_event_loop()
        if result.shouldStop:
            return False

        if _isnotsuite(test):
            self._tearDownPreviousClass(test, result)
            self._handleModuleFixture(test, result)
            self._handleClassSetUp(test, result)
            result._previousTestClass = test.__class__

            if (getattr(test.__class__, '_classSetupFailed', False) or
                    getattr(result, '_moduleSetUpFailed', False)):
                return True

        await loop.run_in_executor(None, test, result)

        if self._cleanup:
            self._removeTestAtIndex(index)

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')


    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())


    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)


if __name__ == '__main__':
    suite = CustomTestSuite()
    suite.addTest(TestStringMethods('test_upper'))
    suite.addTest(TestStringMethods('test_isupper'))
    suite.addTest(TestStringMethods('test_split'))
    unittest.TextTestRunner(verbosity=2).run(suite)

In the main block, I construct my customized CustomTestSuite, add all the test cases, and run it.

Upvotes: 5

yakaboskic

Reputation: 71

Another option, which might be easier if you don't have many test cases and they don't depend on each other, is to kick off each test case manually in a separate process.

For instance, open up a couple of tmux sessions and kick off one test case in each session with something like:

python -m unittest -v MyTestModule.MyTestClass.test_n

Upvotes: 7

Shin

Reputation: 81

If you only need Python 3 support, consider using my fastunit.

I changed only a small amount of unittest's code, making test cases run as coroutines.

It really saved me time.

I only finished it last week and it may not be tested enough; if any error happens, please let me know so that I can make it better. Thanks!

Upvotes: 7

张云辉

Reputation: 101

Use pytest-xdist if you want to run tests in parallel.

The pytest-xdist plugin extends py.test with some unique test execution modes:

  • test run parallelization: if you have multiple CPUs or hosts you can use those for a combined test run. This allows to speed up development or to use special resources of remote machines.

[...]

More info: Rohan Dunham's blog

Upvotes: 8

prathik shirolkar

Reputation: 300

If this is how you originally ran your suite:

runner = unittest.TextTestRunner()
runner.run(suite)


replace it with:

from concurrencytest import ConcurrentTestSuite, fork_for_tests

concurrent_suite = ConcurrentTestSuite(suite, fork_for_tests(4))
runner.run(concurrent_suite)

(concurrencytest is a third-party package; as the name fork_for_tests suggests, it forks worker processes, so it only works on Unix-like systems.)

Upvotes: 2

Joe

Reputation: 3074

The testtools package is an extension of unittest that supports running tests concurrently. It works with your existing test classes that inherit from unittest.TestCase.

For example:

import unittest
import testtools

class MyTester(unittest.TestCase):
    # Your tests; a trivial one here so the example runs as-is.
    def test_example(self):
        self.assertTrue(True)

suite = unittest.TestLoader().loadTestsFromTestCase(MyTester)
concurrent_suite = testtools.ConcurrentStreamTestSuite(lambda: ((case, None) for case in suite))
concurrent_suite.run(testtools.StreamResult())

Upvotes: 23

snakehiss

Reputation: 8774

Python unittest's built-in test runner does not run tests in parallel. It probably wouldn't be too hard to write one that does. I've written my own just to reformat the output and time each test; that took maybe half a day. I think you can swap out the TestSuite class for a derived one that uses multiprocessing without much trouble.

Upvotes: 28
