Reputation: 16122
I have a simple Python script, but it shows a much higher execution time when it is run for the first time in a while. If I execute it again immediately afterwards, it is several times faster.
The script runs on a private test server with no other applications on it, so I don't think a lack of system resources is what is causing it to run slower.
#!/usr/bin/env python
import redis,time,sys
print "hello"
$ time python test.py
real 0m0.149s
user 0m0.072s
sys 0m0.076s
$ time python test.py
real 0m0.051s
user 0m0.020s
sys 0m0.028s
Can anyone explain the variance in the execution time?
I've run similar tests with PHP scripts that include external scripts, and there is negligible variance in their execution time.
This variance affects my application because such scripts are called several times, causing the response to be delivered in anywhere between 70ms and 450ms.
Upvotes: 3
Views: 304
Reputation: 1121834
There can be several factors. Two I can think of right now:
Initial byte compilation.
Python caches the compiled bytecode in .pyc
files; on a first run that file needs to be created, while subsequent runs only need to verify the timestamp on the bytecode cache.
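You can trigger that compilation step explicitly to see where the cache ends up. A minimal sketch (the file name `demo_module.py` is just an example; on modern Python 3 the cache lives in a `__pycache__` directory rather than a sibling `.pyc` file):

```python
import os
import py_compile

# A tiny throwaway module to compile (hypothetical name).
with open("demo_module.py", "w") as f:
    f.write("VALUE = 42\n")

# Compile it up front; normally the interpreter does this lazily
# on first import, which is part of the first-run cost.
cache_path = py_compile.compile("demo_module.py")

print(cache_path)                  # e.g. __pycache__/demo_module.cpython-312.pyc
print(os.path.exists(cache_path))  # the cache now exists on disk
```

On a subsequent import, Python finds this file, checks it against the source's timestamp, and skips recompilation.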
Disk caching
The Python interpreter, the 3 libraries you refer to directly, and anything those libraries use all need to be loaded from disk, quite apart from the script and its bytecode cache. The OS caches such files for faster access.
If you ran other things on the same system, those files will be flushed from the cache and need to be loaded again.
The same applies to directory listings; the checks for where to find the modules in the module search path and tests for bytecode caches all are sped up by cached directory information.
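To get a feel for how much filesystem probing an import involves, you can inspect the search path and resolve a module yourself. A small illustration using the standard `importlib` machinery:

```python
import sys
import importlib.util

# Every import may probe each sys.path entry for candidate source files,
# bytecode caches and packages -- many small disk reads when caches are cold.
print(len(sys.path))  # number of directories consulted per import

# Resolve a stdlib module the same way the import system does.
spec = importlib.util.find_spec("json")
print(spec.origin)    # the file the search ultimately settled on
```

With a warm directory cache these lookups are nearly free; after the cache has been evicted, each probe is a real disk access.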
If such startup times affect your application, consider moving the work into a long-running daemon process. RPC calls (using sockets or localhost network connections) will almost always beat those startup costs. A message queue could provide you with the architecture for such a daemon.
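A minimal sketch of that idea using only the standard library: a resident `socketserver` process pays the import cost once, and clients pay only for a local connection per request. (The handler here just echoes; in your case it would hold the redis connection and do the real work.)

```python
import socket
import socketserver
import threading

class TaskHandler(socketserver.StreamRequestHandler):
    """Handle one request; the process (and its imports) stays resident."""
    def handle(self):
        line = self.rfile.readline().strip()
        # The actual work (e.g. the redis calls) would go here.
        self.wfile.write(b"done: " + line + b"\n")

# Bind to an ephemeral localhost port and serve in a background thread.
server = socketserver.TCPServer(("127.0.0.1", 0), TaskHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# A client pays only the cost of a local connection, not interpreter startup.
with socket.create_connection((host, port)) as conn:
    conn.sendall(b"ping\n")
    reply = conn.makefile().readline().strip()

server.shutdown()
print(reply)
```

The same shape works with a message queue instead of a raw socket; the key point is that interpreter startup, imports, and bytecode checks happen once instead of on every invocation.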
Upvotes: 4