Cesar Kawakami

Reputation: 233

Testing only affected code in Python

I've been working on a fairly large Python project with a number of tests.

Some specific parts of the application require some CPU-intensive testing, and our approach of testing everything before commit stopped making sense.

We've adopted a tag-based selective testing approach since. The problem is that, as the codebase grows, maintaining said tagging scheme becomes somewhat cumbersome, and I'd like to start studying whether we could build something smarter.

In a previous job the test system was such that it only tested code that was affected by the changes in the commit.

It seems like Mighty Moose employs a similar approach for CLR languages. Using these as inspiration, my question is, what alternatives are there (if any) for smart selective testing in Python projects?

In case there aren't any, what would be good initial approaches for building something like that?

Upvotes: 16

Views: 2638

Answers (8)

Gleb Sevruk

Reputation: 534

I guess you are looking for a continuous testing tool?

I created a tool that sits in the background and runs only the impacted tests (you will need the PyCharm plugin and pycrunch-engine from pip):

https://github.com/gleb-sevruk/pycrunch-engine

This will be particularly useful if you are using PyCharm.

More details are in this answer: https://stackoverflow.com/a/58136374/2377370

Upvotes: 1

Kozyarchuk

Reputation: 21867

We've run into this problem a number of times in the past and have been able to answer it by improving and refactoring tests. You don't specify your development practices, nor how long it takes you to run your tests. I would say that if you are doing TDD, your tests need to run in no more than a few seconds; anything that runs longer than that should be moved to a server. If your tests take longer than a day to run, then you have a real issue, and it will limit your ability to deliver functionality quickly and effectively.

Upvotes: 0

user1034211

Reputation: 49

Couldn't you use something like Fabric? http://docs.fabfile.org/en/1.7/

Upvotes: -1

Terry Jan Reedy

Reputation: 19184

Consider turning the question around: which tests need to be excluded to make running the rest tolerable? The CPython test suite in Lib/test excludes resource-heavy tests until they are specifically requested (as they may be on a buildbot). Some of the optional resources are 'cpu' (time), 'largefile' (disk space), and 'network' (connections). (python -m test -h on 3.x, or python -m test.regrtest -h on 2.x, gives the whole list.)

Unfortunately, I cannot tell you how to do so as 'skip if resource is not available' is a feature of the older test.regrtest runner that the test suite uses. There is an issue on the tracker to add resources to unittest.

What might work in the meantime is something like this: add a machine-specific file, exclusions.py, containing a list of strings like those above. Then import exclusions and skip tests, cases, or modules if the appropriate string is in the list.
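
A minimal sketch of that idea, assuming a hand-written, machine-specific exclusions.py (the file name, the EXCLUDED list, and the decorator name are all hypothetical, not part of any existing library):

```python
import unittest

# Hypothetical machine-specific exclusions.py would contain, e.g.:
#     EXCLUDED = ['cpu', 'largefile', 'network']
try:
    from exclusions import EXCLUDED
except ImportError:
    EXCLUDED = []  # no exclusions file on this machine: run everything

def requires_resource(name):
    """Skip the decorated test when its resource is excluded on this machine."""
    return unittest.skipIf(name in EXCLUDED, 'resource %r excluded here' % name)

class HeavyTests(unittest.TestCase):
    @requires_resource('cpu')
    def test_expensive_computation(self):
        self.assertEqual(sum(range(10 ** 6)), 499999500000)
```

Each machine (developer box, buildbot) then controls which resource-tagged tests it runs simply by editing its local exclusions.py.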

Upvotes: 0

robjohncox

Reputation: 3665

The idea of automating the selective testing of parts of your application definitely sounds interesting. However, it feels like this is something that would be much easier to achieve with a statically typed language, but given the dynamic nature of Python it would probably be a serious time investment to get something that can reliably detect all tests affected by a given commit.

When reading your problem, and putting aside the idea of selective testing, the approach that springs to mind is being able to group tests so that you can execute test suites in isolation, enabling a number of useful automated test execution strategies that can shorten the feedback loop such as:

  • Parallel execution of separate test suites on different machines
  • Running tests at different stages of the build pipeline
  • Running some tests on each commit and others on nightly builds.

Therefore, I think your approach of using tags to partition tests into different 'groups' is a smart one, though as you say the management of these becomes difficult with a large test suite. Given this, it may be worth focussing time in building tools to aid in the management of your test suite, particularly the management of your tags. Such a system could be built by gathering information from:

  • Test result output (pass/fail, execution time, logged output)
  • Code coverage output
  • Source code analysis

Good luck, it's definitely an interesting problem you are trying to solve, and I hope some of these ideas help you.

Upvotes: 2

carpenterjc

Reputation: 117

If you write the test results to a file, you can then use make (or a similar alternative) to determine when the tests need to be "rebuilt": make compares the timestamp of each result file with those of the dependent Python files.

Unfortunately, Python isn't well suited to determining dependencies statically, because modules can be imported dynamically, so you can't reliably inspect imports to determine the affected modules.

I would use a naming convention to allow make to solve this generically. A naive example would be:

%.test_result : %_test.py
	python $< > $@

This defines a new implicit rule to convert between _test.py files and test results. Then you can tell make the additional dependencies for your tests, something like this:

my_module_test.py : module1.py module2.py external\module1.py
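
The same staleness logic can also be sketched directly in Python, if you'd rather not depend on make. This assumes a hand-maintained dependency map; the file names and the DEPS dict are hypothetical:

```python
import os
import subprocess

# Hypothetical hand-maintained map: test file -> source modules it covers.
DEPS = {
    'my_module_test.py': ['module1.py', 'module2.py'],
}

def stale(test, sources):
    """True if the test's result file is missing or older than any dependency."""
    result = test.replace('_test.py', '.test_result')
    if not os.path.exists(result):
        return True
    result_mtime = os.path.getmtime(result)
    return any(os.path.getmtime(path) > result_mtime
               for path in [test] + sources)

def run_stale_tests():
    """Re-run only the tests whose dependencies changed, like make would."""
    for test, sources in DEPS.items():
        if stale(test, sources):
            with open(test.replace('_test.py', '.test_result'), 'w') as out:
                subprocess.call(['python', test], stdout=out)
```

This is essentially what make does with the implicit rule above, minus make's parallelism (-j); the hard part in either case remains producing an accurate dependency map.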

Upvotes: 0

Joe McMahon

Reputation: 3382

A few random thoughts on this subject, based on work I did previously on a Perl codebase with similar "full build is too long" problems:

  • Knowing your dependencies is key to making this work. If module A depends on B and C, then you need to test A when either of them changes. It looks like Snakefood is a good way to get a dictionary that outlines the dependencies in your code; if you take that and translate it into a makefile, then you can simply run "make test" on check-in and all of the dependencies (and only the needed ones) will be rebuilt and tested.

  • Once you have a makefile, work on making it parallel; if you can run a half-dozen tests in parallel, you'll greatly decrease running time.
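
The dictionary-to-makefile translation step might look like the sketch below. This is only an illustration: the shape of the deps dict and the _test.py naming convention are assumptions, not Snakefood's actual output format:

```python
# Hypothetical dependency information of the kind snakefood can produce,
# reduced to a dict: source module -> modules it depends on.
deps = {
    'a.py': {'b.py', 'c.py'},
    'b.py': {'c.py'},
}

def to_makefile(deps):
    """Emit one make prerequisite line per module's test, so that
    'make test' only re-runs tests whose dependencies changed."""
    lines = []
    for module, prereqs in sorted(deps.items()):
        test = module.replace('.py', '_test.py')
        lines.append('%s : %s %s' % (test, module, ' '.join(sorted(prereqs))))
    return '\n'.join(lines)

print(to_makefile(deps))
# a_test.py : a.py b.py c.py
# b_test.py : b.py c.py
```

Combined with an implicit rule that runs the test and touches a result file, make's normal timestamp comparison then does the selective re-testing for you.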

Upvotes: 0

aneroid

Reputation: 15987

If you are using unittest.TestCase then you can specify which files to execute with the pattern parameter of test discovery, and then execute tests based on the code changed. Even if you're not using unittest, you should have your tests organised by functional area/module so that you can use a similar approach.
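
For example, a small helper around unittest's discovery (the 'tests' directory layout and the billing pattern are hypothetical):

```python
import unittest

# Hedged sketch: discover and run only the test modules whose file names
# match a pattern, e.g. the functional area touched by a commit.
def run_matching(start_dir, pattern):
    suite = unittest.defaultTestLoader.discover(start_dir, pattern=pattern)
    return unittest.TextTestRunner().run(suite)

# e.g. after a commit touching billing code (hypothetical layout):
# run_matching('tests', 'test_billing*.py')
```

A commit hook or CI script could map changed paths to patterns and call this instead of running the whole suite.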

This is not an elegant solution to your problem, but if each developer, group, or functional code area committed to a separate branch, you could have that branch's tests executed in your continuous-testing environment. Once they complete (and pass), you can merge the branch into your main trunk/master branch.

A combination of nightly jobs of all tests and per-branch tests every 15-30 minutes (if there are new commits) should suffice.

Upvotes: 0
