Reputation: 3288
The output of standard profilers is typically clogged up with details on time spent in low-level functions. In a large, complex project, I first want to get a general idea of which parts of my code are taking longer than others.
Specifically, I wonder if there is a way to tell the profiler to report results limited to a specific call depth. For example, setting depth = 0 should show only the total time for the entire Python script; depth = 1 could show times for individual lines in the script; depth = 2 could show times for functions called by functions in the script, and so on.
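For context, the closest I have found with the built-in cProfile/pstats combination is limiting how many entries are printed after sorting by cumulative time, which gives a rough top-down view but, as far as I can tell, is not a true depth limit. A minimal sketch (the workload in main is just a placeholder):

```python
import cProfile
import io
import pstats

def main():
    # placeholder workload; replace with the real entry point
    return sum(i * i for i in range(100_000))

pr = cProfile.Profile()
pr.enable()
main()
pr.disable()

s = io.StringIO()
# sorting by cumulative time puts outer (high-level) calls first;
# print_stats(5) caps the report at 5 entries, not at a call depth
ps = pstats.Stats(pr, stream=s).sort_stats("cumulative")
ps.print_stats(5)
print(s.getvalue())
```

This surfaces the most expensive high-level calls, but deep helper functions with large cumulative times still appear in the listing.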
Does such a tool exist?
Upvotes: 1
Views: 936
Reputation: 3206
It might not be exactly what you are looking for, but I personally find pyprof2calltree very useful. It converts the output of the built-in cProfile to a format that is understood by tools like KCacheGrind. (There are also implementations using a different widget set, e.g. qcachegrind.)
Tools like KCacheGrind, among other things, allow you to visualize the call tree of the profiled code, and it's easy to see which callees of a particular calling function (e.g. a top-level main function) consume the most time - check the attached screenshot for a better idea. (Image source: link)
With pyprof2calltree and KCacheGrind installed, visualizing a profiler output is just a matter of a single command:
pyprof2calltree -k -i todo_profile.cprof
The -i option specifies the input file, and the -k switch runs the installed visualizer tool (e.g. KCacheGrind).
Upvotes: 2