Wassinger

Reputation: 387

Testing performance in unittest

I have a specialized string processing method and I want tests so I can easily check if it's working correctly or not.

I have written several tests for correctness of the result using unittest and I am satisfied with the result. My workflow is to use the command "Run Unittests in [FOLDER]" in PyCharm, and then export the results to HTML.

The algorithm I used is a bit complicated so I want to make sure I don't accidentally make it unreasonably inefficient. So I also want a test that checks for this. However, tests are usually supposed to have an objective pass/fail result - so I'm not sure how to implement my performance check.

I've found a simple benchmark: I time how long my function takes to process a string and compare it to how long string.split() takes. A single call of either method is too fast to measure, so I run each a few thousand times. Since both are string processing methods, I figure the comparison is reasonable as a first approximation. I then created a test method that takes the ratio of my method's run time to that of split, and checks that it is not more than 1000. "Within 3 orders of magnitude of the library function" seems like "close enough" to me, so I have it working okay now.
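A minimal sketch of the test described above (process_string is a placeholder for the actual method, and the sample text and repeat count are arbitrary):

```python
import timeit
import unittest


def process_string(s):
    # Placeholder for the specialized string processing method.
    return s.split()


class PerformanceTest(unittest.TestCase):
    def test_within_three_orders_of_magnitude(self):
        sample = "some sample text " * 100
        n = 5000  # a single call is too fast to time, so repeat it
        mine = timeit.timeit(lambda: process_string(sample), number=n)
        baseline = timeit.timeit(lambda: sample.split(), number=n)
        ratio = mine / baseline
        # "Within 3 orders of magnitude of the library function"
        self.assertLess(ratio, 1000)
```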

However, it would be nice to know what the ratio actually was. For example, if I rewrite the method to fix a bug and it ends up running twice as slow, that is something I would want to know (even if it still meets my "3 orders of magnitude" requirement). What I would like is for the HTML test report to show the actual measurement alongside the pass/fail result, so that I can inspect it manually and decide whether it needs further attention (even if it passes). The HTML report already shows how long each test took to run, but that includes not only the runtime of my function, but also the benchmark function and various setup/teardown that is irrelevant to what I'm measuring (how fast the function goes through a string).

So, how can I include an arbitrary measurement in my HTML test report?

Upvotes: 3

Views: 2808

Answers (1)

Patrick Da Silva

Reputation: 1954

This is perhaps a bit late, but for the sake of completeness...

An easy thing you can do is use Python's timeit module within your tests to measure performance. You can then take the value returned by timeit and add a line like

self.assertLess(timer_value, performance_threshold)

You can let your performance threshold itself be a timer value, computed from a given set of strings on which you run string.split(" "). As a cheap option you can compute it on the fly in the unittest, or you can compute it in advance and save the value if your test data isn't going to change much over time. You'll get a test failure if your computation takes too long, because the timer value will then be greater than the threshold you set.
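A minimal sketch of computing the threshold on the fly (my_method is a placeholder for the method under test, and the 1000x multiplier matches the "3 orders of magnitude" tolerance from the question):

```python
import timeit
import unittest


def my_method(s):
    # Placeholder for the string processing method under test.
    return s.split(" ")


class StringPerformanceTest(unittest.TestCase):
    def test_performance(self):
        sample = "one two three four " * 250
        n = 1000
        # Time the method under test.
        timer_value = timeit.timeit(lambda: my_method(sample), number=n)
        # Compute the threshold on the fly from string.split(" ").
        performance_threshold = 1000 * timeit.timeit(
            lambda: sample.split(" "), number=n
        )
        # The msg puts the measured values into the failure output.
        self.assertLess(
            timer_value,
            performance_threshold,
            msg="measured %.6fs, threshold %.6fs"
            % (timer_value, performance_threshold),
        )
```

Note that the msg only appears in the report when the assertion fails; to see the measurement on every run, you'd still need to emit it yourself (e.g. by printing or logging it).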

I'm not aware of any timing feature in the unittest module, but you can already do a lot with the time and timeit modules. This guy played a bit with that, so perhaps you can have a read.

Hope that helps,

Upvotes: 4

Related Questions