Reputation: 1475
Can someone suggest a best practice or a suitable library to determine the memory and CPU usage of an individual Python function?

I have looked at guppy and meliae, but I still can't get granularity down to the function level. Am I missing something?
UPDATE: I am asking this to solve a specific situation: we have a set of distributed tasks running on cloud instances, and we now need to reorganize the placement of those tasks onto the right instance types within the cluster, for example placing high-memory-consuming tasks on instances with more memory. By tasks (celery tasks) I mean plain functions, whose resource usage during execution we now need to profile.
Thanks.
Upvotes: 15
Views: 9289
Reputation: 12069
You may want to look into a CPU profiler for Python:

http://docs.python.org/library/profile.html
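A minimal sketch of how to invoke it (my_task here is a hypothetical stand-in for one of your own functions):

    import cProfile

    def my_task():
        # placeholder workload; swap in the real function you want to profile
        return sum(i * i for i in range(100000))

    # run a single call under the profiler and print per-function statistics
    cProfile.run('my_task()')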
Example output of cProfile.run(command[, filename]):

    2706 function calls (2004 primitive calls) in 4.504 CPU seconds

    Ordered by: standard name

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
         2    0.006    0.003    0.953    0.477  pobject.py:75(save_objects)
      43/3    0.533    0.012    0.749    0.250  pobject.py:99(evaluate)
       ...
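If you pass a filename, the raw stats are written to disk and can be sorted afterwards with the standard pstats module. A short sketch (task.prof is just an example filename, and my_task is the same placeholder as above):

    import cProfile
    import pstats

    cProfile.run('my_task()', 'task.prof')  # dump raw stats to a file

    # load the dump and show the 10 costliest functions by cumulative time
    stats = pstats.Stats('task.prof')
    stats.sort_stats('cumulative').print_stats(10)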
Memory needs a profiler too. Two open-source profilers to look at are PySizer and Heapy.
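For Heapy, a minimal sketch of measuring what a single function allocates (this assumes the guppy package, which ships Heapy, is installed; my_task is again a placeholder):

    from guppy import hpy

    hp = hpy()
    hp.setrelheap()    # set a baseline so only allocations from here on count
    result = my_task()
    print(hp.heap())   # objects allocated since the baseline, grouped by type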
Upvotes: 9