Reputation: 60689
I am trying to profile our Django unit tests (if the tests are faster, we'll run 'em more often). I've run them through Python's built-in cProfile profiler, producing a pstats file.
However, the signal-to-noise ratio is bad. There are too many functions listed: lots and lots of Django-internal functions are called when I make a single database query. This makes it hard to see what's going on.
Is there any way I can "roll up" all function calls that are outside a certain directory?
e.g. if I call a Python function outside my directory, and it then calls 5 other functions (all outside my directory), then it should roll all those up so it looks like there was only one function call, and it should show the cumulative time for the whole thing.
This, obviously, is bad if you want to profile (say) Django, but I don't want to do that.
I looked at the pstats.Stats object, but can't see an obvious way to modify this data.
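One partial workaround, though it filters rather than truly merges: `print_stats()` accepts regex restrictions, so you can limit the printed rows to your own source tree while external callees' time still shows up in your functions' cumulative column. A minimal, self-contained sketch (`my_function` is just a stand-in for your own code):

```python
import cProfile
import io
import pstats


def my_function():
    # Stand-in for your own test code; any work will do.
    return sum(i * i for i in range(1000))


profiler = cProfile.Profile()
profiler.enable()
my_function()
profiler.disable()

# print_stats() accepts regex restrictions: only rows whose
# "file:lineno(function)" string matches are printed. Restricting to
# your own source directory hides Django internals, though their time
# still appears in your functions' cumulative column.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats("my_function")
output = stream.getvalue()
print(output)
```

In a real run you would pass your project directory (or a regex matching it) instead of `"my_function"`.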
Upvotes: 1
Views: 324
Reputation: 40659
I have little experience with python, but a lot in performance tuning, so here's a possibility:
Rather than run profiling as part of the unit tests, just do overall execution time measurements. If those don't change much, you're OK.
When you do detect a change, or if you just want to make things run faster, use this method. It has a much higher "signal-to-noise ratio", as you put it.
The difference is that you're not relying on the profiler to figure out what you need to look at. It's more like a breakpoint in a debugger, where the break occurs not at a place of your choosing but, with good probability, at a moment of unnecessary slowness. If on two or more occasions you catch it doing something that could be replaced with something better, fixing it will pay off on average, and it will make other problems easier to find by the same method.
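The pausing idea can even be approximated in-process: interrupt the program at intervals and record the call stack, then look for stack lines that recur across samples. A rough sketch using a POSIX interval timer (`slow_work` is a made-up stand-in for the code under investigation; `SIGALRM` is not available on Windows):

```python
import signal
import traceback

samples = []
INTERVAL = 0.02  # seconds between "pauses"


def sample_stack(signum, frame):
    # Record the full call stack at the moment of interruption.
    samples.append("".join(traceback.format_stack(frame)))
    signal.setitimer(signal.ITIMER_REAL, INTERVAL)  # schedule next pause


# SIGALRM/setitimer are POSIX-only and must run on the main thread.
signal.signal(signal.SIGALRM, sample_stack)
signal.setitimer(signal.ITIMER_REAL, INTERVAL)


def slow_work():
    # Deliberately slow stand-in for the code being investigated.
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total


slow_work()
signal.setitimer(signal.ITIMER_REAL, 0)  # stop sampling

# Stack lines that recur across many samples point at the slow spots.
```

With real code, any function that shows up in a large fraction of the samples is where the time is going, which is exactly the signal the manual break-and-look method gives you.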
Upvotes: 1