Reputation: 4435
Is there a way to measure time with high precision in Python --- more precise than one second? I doubt that there is a cross-platform way of doing that; I'm interested in high-precision time on Unix, particularly Solaris running on a Sun SPARC machine.
timeit seems to be capable of high-precision time measurement, but rather than measure how long a code snippet takes, I'd like to directly access the time values.
Upvotes: 94
Views: 190245
Reputation: 52449
Among time.monotonic_ns(), time.perf_counter_ns(), and time.time_ns(), only time.perf_counter_ns() has sub-microsecond precision on both Linux and Windows.
Many people may not understand the difference between resolution, precision, and accuracy, and may mistakenly think precision timing is easier and more accessible than it really is. Remember, in this context of software timing: resolution is the smallest time increment the clock can report, precision is the repeatability of the measurement (the jitter, reported below as median +/- standard deviation), and accuracy is how close the reported time is to the true time.
Using my time_monotonic_ns__get_precision.py program below, here are my results, tested on my high-end 20-thread Dell Precision 5570 laptops: Intel Core i9 (Linux) and Intel Core i7 (Windows 11). Your results will vary based on your hardware and OS:
-------------------------------------------------------------------------------
1. time.monotonic_ns()
-------------------------------------------------------------------------------
           Resolution   Precision
           ----------   ---------
Linux:     1 ns         0.070 us +/- 0.118 us        (70 ns +/- 118 ns)
Windows:   1 ns         16000.000 us +/- 486.897 us  (16 ms +/- 0.487 ms)

-------------------------------------------------------------------------------
2. time.perf_counter_ns()
-------------------------------------------------------------------------------
           Resolution   Precision
           ----------   ---------
Linux:     1 ns         0.069 us +/- 0.070 us        (69 ns +/- 70 ns)
Windows:   1 ns         0.100 us +/- 0.021 us        (100 ns +/- 21 ns)

-------------------------------------------------------------------------------
3. time.time_ns()
-------------------------------------------------------------------------------
           Resolution   Precision
           ----------   ---------
Linux:     1 ns         0.074 us +/- 0.226 us        (74 ns +/- 226 ns)
Windows:   1 ns         10134.354 us +/- 5201.053 us (10.134 ms +/- 5.201 ms)
Notice that even though all 3 functions have 1 ns resolution, only time.perf_counter_ns() has sub-microsecond precision on both Linux and Windows. The other two functions have sub-microsecond precision only on Linux, but are horrible (low precision) on Windows.
If using Python 3.7 or later, use the modern, cross-platform time module functions such as time.monotonic_ns(), time.perf_counter_ns(), and time.time_ns(), documented here: https://docs.python.org/3/library/time.html#time.monotonic_ns.
import time
# For Unix, Linux, Windows, etc.
time_ns = time.monotonic_ns() # note: unspecified epoch
time_ns = time.perf_counter_ns() # **best precision**
time_ns = time.time_ns() # known epoch
# Unix or Linux only
time_ns = time.clock_gettime_ns()
# etc. etc. There are others. See the link above.
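For example, here is a minimal elapsed-time measurement using the best-precision option above (the work being timed is just a placeholder):

import time

t0_ns = time.perf_counter_ns()
sum(range(1_000_000))  # placeholder for the code being timed
elapsed_ns = time.perf_counter_ns() - t0_ns
print(f"{elapsed_ns} ns ({elapsed_ns / 1e6:.3f} ms)")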
See also this note from my other answer from 2016, here: How can I get millisecond and microsecond-resolution timestamps in Python?:

You might also try time.clock_gettime_ns() on Unix or Linux systems. Based on its name, it appears to call the underlying clock_gettime() C function, which I use in my nanos() function in C in my answer here and in my C Unix/Linux library here: timinglib.c.
As a quick test, you can run the following to get a feel for what the minimum resolution is on your particular hardware and OS. I have tested and run this on both Linux and Windows:
python/time_monotonic_ns__get_precision.py from my eRCaGuy_hello_world repo:
#!/usr/bin/env python3
import os
import pandas as pd
import time
SAMPLE_SIZE_DEFAULT = 20000
# For cases where Windows may have really crappy 16ms precision, we need a
# significantly larger sample size.
SAMPLE_SIZE_MIN_FOR_WINDOWS = 20000000
DEBUG = False # Set to True to enable debug prints
def debug_print(*args, **kwargs):
if DEBUG:
print(*args, **kwargs)
def print_bar():
debug_print("="*56, "\n")
def process_timestamps(timestamps_ns, output_stats_header_str):
"""
Process the timestamps list to determine the time precision of the system.
"""
# Create a pandas DataFrame for efficient analysis of large datasets
df = pd.DataFrame({"timestamp_ns": timestamps_ns}, dtype='int64')
debug_print(f"df original:\n{df}")
print_bar()
# Remove duplicate timestamps. On Linux, there won't be any, because it has
# sub-microsecond precision, but on Windows, the dataset may be mostly
# duplicates because repeated calls to `time.monotonic_ns()` may return the
# same value if called in quick succession.
df.drop_duplicates(inplace=True)
debug_print(f"df no duplicates:\n{df}")
print_bar()
if len(df) < 2:
print("Error: not enough data to calculate time precision. Try \n"
"increasing `SAMPLE_SIZE` by a factor of 10, and try again.")
exit(1)
# Now calculate the time differences between the timestamps.
df["previous_timestamp_ns"] = df["timestamp_ns"].shift(1)
df = df.dropna() # remove NaN row
df["previous_timestamp_ns"] = df["previous_timestamp_ns"].astype('int64')
df["delta_time_us"] = (
df["timestamp_ns"] - df["previous_timestamp_ns"]) / 1e3
debug_print(f"df:\n{df}")
print_bar()
# Output statistics
mean = df["delta_time_us"].mean()
median = df["delta_time_us"].median()
mode = df["delta_time_us"].mode()[0]
stdev = df["delta_time_us"].std()
print(f">>>>>>>>>> {output_stats_header_str} <<<<<<<<<<")
print(f"Mean: {mean:.3f} us")
print(f"Median: {median:.3f} us")
print(f"Mode: {mode:.3f} us")
print(f"Stdev: {stdev:.3f} us")
print(f"FINAL ANSWER: time precision on this system: "
+ f"{median:.3f} +/- {stdev:.3f} us\n")
# =============================================================================
# 1. Test `time.monotonic_ns()`
# =============================================================================
SAMPLE_SIZE = SAMPLE_SIZE_DEFAULT
if os.name == 'nt':
# The OS is Windows
if SAMPLE_SIZE < SAMPLE_SIZE_MIN_FOR_WINDOWS:
SAMPLE_SIZE = SAMPLE_SIZE_MIN_FOR_WINDOWS
print(f"Detected: running on Windows. Using a larger SAMPLE_SIZE of "
f"{SAMPLE_SIZE}.\n")
# Gather timestamps with zero delays between them
# - preallocated list, so that no dynamic memory allocation will happen in the
# loop below
timestamps_ns = [None]*SAMPLE_SIZE
for i in range(len(timestamps_ns)):
timestamps_ns[i] = time.monotonic_ns()
process_timestamps(timestamps_ns, "1. time.monotonic_ns()")
# =============================================================================
# 2. Test `time.perf_counter_ns()`
# =============================================================================
SAMPLE_SIZE = SAMPLE_SIZE_DEFAULT
timestamps_ns = [None]*SAMPLE_SIZE
for i in range(len(timestamps_ns)):
timestamps_ns[i] = time.perf_counter_ns()
process_timestamps(timestamps_ns, "2. time.perf_counter_ns()")
# =============================================================================
# 3. Test `time.time_ns()`
# =============================================================================
SAMPLE_SIZE = SAMPLE_SIZE_DEFAULT
if os.name == 'nt':
# The OS is Windows
if SAMPLE_SIZE < SAMPLE_SIZE_MIN_FOR_WINDOWS:
SAMPLE_SIZE = SAMPLE_SIZE_MIN_FOR_WINDOWS
print(f"Detected: running on Windows. Using a larger SAMPLE_SIZE of "
f"{SAMPLE_SIZE}.\n")
timestamps_ns = [None]*SAMPLE_SIZE
for i in range(len(timestamps_ns)):
timestamps_ns[i] = time.time_ns()
process_timestamps(timestamps_ns, "3. time.time_ns()")
Here are my runs and output on a couple of high-end 20-thread Dell Precision 5570 laptops: Intel Core i9 (Linux) and Intel Core i7 (Windows 11).
On Linux Ubuntu 22.04 (python3 --version shows Python 3.10.12):
eRCaGuy_hello_world$ time python/time_monotonic_ns__get_precision.py
>>>>>>>>>> 1. time.monotonic_ns() <<<<<<<<<<
Mean: 0.081 us
Median: 0.070 us
Mode: 0.070 us
Stdev: 0.118 us
FINAL ANSWER: time precision on this system: 0.070 +/- 0.118 us
>>>>>>>>>> 2. time.perf_counter_ns() <<<<<<<<<<
Mean: 0.076 us
Median: 0.069 us
Mode: 0.068 us
Stdev: 0.070 us
FINAL ANSWER: time precision on this system: 0.069 +/- 0.070 us
>>>>>>>>>> 3. time.time_ns() <<<<<<<<<<
Mean: 0.080 us
Median: 0.074 us
Mode: -0.030 us
Stdev: 0.226 us
FINAL ANSWER: time precision on this system: 0.074 +/- 0.226 us
real 0m0.264s
user 0m0.802s
sys 0m1.124s
On Windows 11 (python --version shows Python 3.12.1):
eRCaGuy_hello_world$ time python/time_monotonic_ns__get_precision.py
Detected: running on Windows. Using a larger SAMPLE_SIZE of 20000000.
>>>>>>>>>> 1. time.monotonic_ns() <<<<<<<<<<
Mean: 15625.000 us
Median: 16000.000 us
Mode: 16000.000 us
Stdev: 486.897 us
FINAL ANSWER: time precision on this system: 16000.000 +/- 486.897 us
>>>>>>>>>> 2. time.perf_counter_ns() <<<<<<<<<<
Mean: 0.101 us
Median: 0.100 us
Mode: 0.100 us
Stdev: 0.021 us
FINAL ANSWER: time precision on this system: 0.100 +/- 0.021 us
Detected: running on Windows. Using a larger SAMPLE_SIZE of 20000000.
>>>>>>>>>> 3. time.time_ns() <<<<<<<<<<
Mean: 9639.436 us
Median: 10134.354 us
Mode: 610.144 us
Stdev: 5201.053 us
FINAL ANSWER: time precision on this system: 10134.354 +/- 5201.053 us
real 0m8.301s
user 0m0.000s
sys 0m0.000s
The median value in each case is the most representative of the typical precision you can expect on your system, because the median suppresses both the time jitter and the outliers (unlike the mean, which averages out the jitter but is still skewed by outliers).
This proves conclusively that only the time.perf_counter_ns() function has both sub-microsecond resolution and precision on both Windows and Linux, which is what I needed to know the most.
Note that when using time.monotonic() or time.monotonic_ns(), the official documentation says:

The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid.
So, if you need an absolute datetime-type timestamp (i.e., one that contains information such as year, month, and day) instead of a precision relative timestamp, then you should consider using datetime instead. See this answer here, my comment below it, and the official datetime documentation here, specifically for datetime.now() here. Here is how to get a timestamp with that module:
from datetime import datetime
now_datetime_object = datetime.now()
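For instance, here is a quick look at the calendar fields and sub-second component that object carries (the printed values are just whatever "now" happens to be):

from datetime import datetime

now = datetime.now()
print(now.year, now.month, now.day, now.hour, now.minute, now.second, now.microsecond)
print(now.isoformat())   # e.g. 2023-08-07T14:14:36.123456
print(now.timestamp())   # float seconds since the Unix epoch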
Do not expect it to have the resolution, precision, or monotonicity of time.clock_gettime_ns(), however. So, for timing small differences or doing precision timing work, prefer time.clock_gettime_ns() instead.
Another option is time.time() -- also not guaranteed to have "better precision than 1 second". You can convert its output to a calendar time (a struct_time) using time.localtime() or time.gmtime(). See here. Here's how to use it:
>>> import time
>>> time.time()
1691442858.8543699
>>> time.localtime(time.time())
time.struct_time(tm_year=2023, tm_mon=8, tm_mday=7, tm_hour=14, tm_min=14, tm_sec=36, tm_wday=0, tm_yday=219, tm_isdst=0)
Or, even better: time.time_ns()
:
>>> import time
>>> time.time_ns()
1691443244384978570
>>> time.localtime(time.time_ns()/1e9)
time.struct_time(tm_year=2023, tm_mon=8, tm_mday=7, tm_hour=14, tm_min=20, tm_sec=57, tm_wday=0, tm_yday=219, tm_isdst=0)
>>> time.time_ns()/1e9
1691443263.0889063
On Windows, in Python 3.3 or later, you can use time.perf_counter(), as shown by @ereOn here. See: https://docs.python.org/3/library/time.html#time.perf_counter. This provides roughly a 0.5 us resolution timestamp, in floating-point seconds. Ex:
import time
# For Python 3.3 or later
time_sec = time.perf_counter()  # cross-platform (Windows, Unix, Linux)
# or
time_sec = time.monotonic()     # also cross-platform since Python 3.3
Summary:
See my other answer from 2016 here for 0.5-us-resolution timestamps, or better, in Windows and Linux, for versions of Python as old as 3.0, 3.1, or 3.2. We do this by calling C or C++ shared-object libraries (.dll on Windows, or .so on Unix or Linux) using the ctypes module in Python.
I provide these functions:
millis()
micros()
delay()
delayMicroseconds()
Download GS_timing.py from my eRCaGuy_PyTime repo, then do:
import GS_timing
time_ms = GS_timing.millis()
time_us = GS_timing.micros()
GS_timing.delay(10) # delay 10 ms
GS_timing.delayMicroseconds(10000) # delay 10000 us
Details:
In 2016, I was working in Python 3.0 or 3.1 on an embedded project on a Raspberry Pi, which I also tested and ran frequently on Windows. I needed microsecond-level resolution for some precise timing I was doing with ultrasonic sensors. The Python language at the time did not provide this resolution, and neither did any answer to this question, so I came up with this separate Q&A here: How can I get millisecond and microsecond-resolution timestamps in Python?. I stated in the question at the time:
I read other answers before asking this question, but they rely on the time module, which prior to Python 3.3 did NOT have any type of guaranteed resolution whatsoever. Its resolution is all over the place. The most upvoted answer here quotes a Windows resolution (using their answer) of 16 ms, which is 32000 times worse than my answer provided here (0.5 us resolution). Again, I needed 1 ms and 1 us (or similar) resolutions, not 16000 us resolution.
Zero, I repeat: zero answers here on 12 July 2016 had any resolution better than 16 ms for Windows in Python 3.1. So, I came up with this answer, which has 0.5 us or better resolution in pre-Python-3.3 Windows and Linux. If you need something like that for an older version of Python, or if you just want to learn how to call C or C++ dynamic libraries in Python (.dll "dynamically linked library" files in Windows, or .so "shared object" library files in Unix or Linux) using the ctypes library, see my other answer here.
Upvotes: 4
Reputation: 5240
The standard time.time() function provides sub-second precision, though that precision varies by platform. On Linux and Mac, precision is +/- 1 microsecond, or 0.001 milliseconds. Python on Windows, with Python < 3.7, has +/- 16 milliseconds precision due to clock-implementation problems caused by process interrupts. The timeit module can provide higher resolution if you're measuring execution time.
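For instance, here is a quick timeit sketch (the statement being timed is just a placeholder):

import timeit

# Run the statement 10000 times and report the total elapsed seconds,
# measured with the highest-resolution timer available on this platform.
print(timeit.timeit("sum(range(1000))", number=10000))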
>>> import time
>>> time.time() #return seconds from epoch
1261367718.971009
Python 3.7 introduces new functions to the time module that provide higher resolution for longer time periods:
>>> import time
>>> time.time_ns()
1530228533161016309
>>> time.time_ns() / (10 ** 9) # convert to floating-point seconds
1530228544.0792289
Upvotes: 109
Reputation: 11
Here is a Python 3 solution for Windows, building upon the answer posted above by CyberSnoopy (using GetSystemTimePreciseAsFileTime). We borrow some code from jfs's answer to "Python datetime.utcnow() returning incorrect datetime" and get a precise timestamp (Unix time) in microseconds:
#! python3
import ctypes.wintypes
def utcnow_microseconds():
system_time = ctypes.wintypes.FILETIME()
#system call used by time.time()
#ctypes.windll.kernel32.GetSystemTimeAsFileTime(ctypes.byref(system_time))
#getting high precision:
ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(ctypes.byref(system_time))
large = (system_time.dwHighDateTime << 32) + system_time.dwLowDateTime
return large // 10 - 11644473600000000
for ii in range(5):
print(utcnow_microseconds()*1e-6)
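For reference on the constants used above: FILETIME counts 100-nanosecond intervals since 1601-01-01, so the integer division by 10 converts to microseconds, and 11644473600000000 is the 1601-to-1970 epoch offset expressed in microseconds (11,644,473,600 seconds). A quick sanity check of that constant:

# Seconds between the Windows epoch (1601-01-01) and the Unix epoch (1970-01-01),
# expressed in microseconds; matches the constant subtracted above.
print(11644473600 * 10**6)  # 11644473600000000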
References
https://learn.microsoft.com/en-us/windows/win32/sysinfo/time-functions
https://learn.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimepreciseasfiletime
https://support.microsoft.com/en-us/help/167296/how-to-convert-a-unix-time-t-to-a-win32-filetime-or-systemtime
Upvotes: 1
Reputation: 41168
Python 3.7 introduces 6 new time functions with nanosecond resolution. For example, instead of time.time() you can use time.time_ns() to avoid floating-point imprecision issues:
import time
print(time.time())
# 1522915698.3436284
print(time.time_ns())
# 1522915698343660458
These 6 functions are described in PEP 564:
time.clock_gettime_ns(clock_id)
time.clock_settime_ns(clock_id, time:int)
time.monotonic_ns()
time.perf_counter_ns()
time.process_time_ns()
time.time_ns()
These functions are similar to the versions without the _ns suffix, but return a number of nanoseconds as a Python int.
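As a quick illustration of the floating-point imprecision the _ns variants avoid (the exact values will vary on each run):

import time

t_ns = time.time_ns()               # exact integer nanoseconds since the epoch
t_float = t_ns / 1e9                # the same instant as float seconds, like time.time()
round_trip_ns = int(t_float * 1e9)  # convert the float back to nanoseconds
# Near the current epoch a 64-bit float can only resolve a few hundred nanoseconds,
# so the round trip generally does not reproduce the exact nanosecond count:
print(t_ns, round_trip_ns, t_ns - round_trip_ns)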
Upvotes: 11
Reputation: 3731
On the same Windows 10 system, using two distinct measurement approaches, there appears to be an approximate 500 us (0.5 ms) time difference. If you care about sub-millisecond precision, check my code below.
The modified code is based on code from users cod3monk3y and Kevin S.
Environment: Windows 10, Python 3.7.3 (default, date, time) [MSC v.1915 64 bit (AMD64)]
import sys
import time

def measure1(mean):
for i in range(1, my_range+1):
x = time.time()
td = x- samples1[i-1][2]
if i-1 == 0:
td = 0
td = f'{td:.6f}'
samples1.append((i, td, x))
mean += float(td)
print (mean)
sys.stdout.flush()
time.sleep(0.001)
mean = mean/my_range
return mean
def measure2(nr):
t0 = time.time()
t1 = t0
while t1 == t0:
t1 = time.time()
td = t1-t0
td = f'{td:.6f}'
return (nr, td, t1, t0)
samples1 = [(0, 0, 0)]
my_range = 10
mean1 = 0.0
mean2 = 0.0
mean1 = measure1(mean1)
for i in samples1: print (i)
print ('...\n\n')
samples2 = [measure2(i) for i in range(11)]
for s in samples2:
#print(f'time delta: {s:.4f} seconds')
mean2 += float(s[1])
print (s)
mean2 = mean2/my_range
print ('\nMean1 : ' f'{mean1:.6f}')
print ('Mean2 : ' f'{mean2:.6f}')
The measure1 results:
nr, td, t0
(0, 0, 0)
(1, '0.000000', 1562929696.617988)
(2, '0.002000', 1562929696.6199884)
(3, '0.001001', 1562929696.620989)
(4, '0.001001', 1562929696.62199)
(5, '0.001001', 1562929696.6229906)
(6, '0.001001', 1562929696.6239917)
(7, '0.001001', 1562929696.6249924)
(8, '0.001000', 1562929696.6259928)
(9, '0.001001', 1562929696.6269937)
(10, '0.001001', 1562929696.6279945)
...
The measure2 results:
nr, td , t1, t0
(0, '0.000500', 1562929696.6294951, 1562929696.6289947)
(1, '0.000501', 1562929696.6299958, 1562929696.6294951)
(2, '0.000500', 1562929696.6304958, 1562929696.6299958)
(3, '0.000500', 1562929696.6309962, 1562929696.6304958)
(4, '0.000500', 1562929696.6314962, 1562929696.6309962)
(5, '0.000500', 1562929696.6319966, 1562929696.6314962)
(6, '0.000500', 1562929696.632497, 1562929696.6319966)
(7, '0.000500', 1562929696.6329975, 1562929696.632497)
(8, '0.000500', 1562929696.633498, 1562929696.6329975)
(9, '0.000500', 1562929696.6339984, 1562929696.633498)
(10, '0.000500', 1562929696.6344984, 1562929696.6339984)
End result:
Mean1 : 0.001001 # (measure1 function)
Mean2 : 0.000550 # (measure2 function)
Upvotes: 1
Reputation: 960
The original question specifically asked for Unix, but multiple answers have touched on Windows, and as a result there is misleading information about Windows. The default timer resolution on Windows is 15.6 ms, as you can verify here.
Using a slightly modified script from cod3monk3y, I can show that the Windows timer resolution is ~15 milliseconds by default. I'm using a tool available here to modify the resolution.
Script:
import time
# measure the smallest time delta by spinning until the time changes
def measure():
t0 = time.time()
t1 = t0
while t1 == t0:
t1 = time.time()
return t1-t0
samples = [measure() for i in range(30)]
for s in samples:
print(f'time delta: {s:.4f} seconds')
These results were gathered on Windows 10 Pro 64-bit running Python 3.7 64-bit.
Upvotes: 4
Reputation: 1028
For those stuck on Windows (version >= Server 2012 or Windows 8) and Python 2.7:
import ctypes
class FILETIME(ctypes.Structure):
_fields_ = [("dwLowDateTime", ctypes.c_uint),
("dwHighDateTime", ctypes.c_uint)]
def time():
"""Accurate version of time.time() for windows, return UTC time in term of seconds since 01/01/1601
"""
file_time = FILETIME()
ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(ctypes.byref(file_time))
return (file_time.dwLowDateTime + (file_time.dwHighDateTime << 32)) / 1.0e7
GetSystemTimePreciseAsFileTime function
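If you want seconds since the Unix epoch (1970) rather than 1601, subtracting the fixed offset between the two epochs works; here is a minimal sketch building on the time() function above (the helper name unix_time is just for illustration):

SECONDS_1601_TO_1970 = 11644473600  # offset between the Windows (1601) and Unix (1970) epochs

def unix_time():
    # time() from above returns seconds since 1601; shift it to the Unix epoch.
    return time() - SECONDS_1601_TO_1970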
Upvotes: 1
Reputation: 21
# Method sketch: starts self.send_request in a new thread every sec_arg (10) seconds.
# Requires `import time` and `import threading` at module level.
def start(self):
sec_arg = 10.0
cptr = 0
time_start = time.time()
time_init = time.time()
while True:
cptr += 1
time_start = time.time()
time.sleep(((time_init + (sec_arg * cptr)) - time_start ))
# AND YOUR CODE .......
t00 = threading.Thread(name='thread_request', target=self.send_request, args=([]))
t00.start()
Upvotes: -1
Reputation: 355
I observed that the resolution of time.time() is different between Windows 10 Professional and Education versions.
On a Windows 10 Professional machine, the resolution is 1 ms. On a Windows 10 Education machine, the resolution is 16 ms.
Fortunately, there's a tool that increases Python's time resolution in Windows: https://vvvv.org/contribution/windows-system-timer-tool
With this tool, I was able to achieve 1 ms resolution regardless of the Windows edition. You will need to keep it running while executing your Python code.
Upvotes: 2
Reputation: 6711
The comment left by tiho on Mar 27 '14 at 17:21 deserves to be its own answer:
In order to avoid platform-specific code, use timeit.default_timer()
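For instance, a minimal usage sketch (on Python 3.3+, timeit.default_timer is an alias for time.perf_counter):

import timeit

t0 = timeit.default_timer()
# ... code being timed ...
elapsed_sec = timeit.default_timer() - t0
print(elapsed_sec)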
Upvotes: 2
Reputation: 55726
If Python 3 is an option, you have two choices:
- time.perf_counter, which always uses the most accurate clock on your platform. It DOES include time spent outside of the process.
- time.process_time, which returns the CPU time. It does NOT include time spent outside of the process.
The difference between the two can be shown with:
from time import (
process_time,
perf_counter,
sleep,
)
print(process_time())
sleep(1)
print(process_time())
print(perf_counter())
sleep(1)
print(perf_counter())
Which outputs:
0.03125
0.03125
2.560001310720671e-07
1.0005455362793145
Upvotes: 32
Reputation: 59
time.clock() has 13 decimal places on Windows but only two on Linux. time.time() has 17 decimal places on Linux and 16 on Windows, but the actual precision is different.
I don't agree with the documentation that time.clock() should be used for benchmarking on Unix/Linux. It is not precise enough, so which timer to use depends on the operating system.
On Linux, the time resolution is high in time.time():
>>> time.time(), time.time()
(1281384913.4374139, 1281384913.4374161)
On Windows, however, the time function seems to keep returning the last value:
>>> time.time()-int(time.time()), time.time()-int(time.time()), time.time()-time.time()
(0.9570000171661377, 0.9570000171661377, 0.0)
Even if I write the calls on different lines in Windows, it still returns the same value, so the real precision is lower.
So, for serious measurements, a platform check (import platform, platform.system()) has to be done in order to determine whether to use time.clock() or time.time().
(Tested on Windows 7 and Ubuntu 9.10 with Python 2.6 and 3.1)
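A minimal sketch of that platform check, assuming a pre-3.3 interpreter (time.clock() was removed in Python 3.8; on modern Python, time.perf_counter() covers both platforms):

import platform
import time

if platform.system() == 'Windows':
    high_res_timer = time.clock  # high resolution on Windows (pre-3.3 Python)
else:
    high_res_timer = time.time   # high resolution on Unix/Linux

t0 = high_res_timer()
# ... code being timed ...
print(high_res_timer() - t0)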
Upvotes: 5
Reputation: 9843
David's post was attempting to show what the clock resolution is on Windows. I was confused by his output, so I wrote some code that shows that time.time() on my Windows 8 x64 laptop has a resolution of 1 msec:
import time

# measure the smallest time delta by spinning until the time changes
def measure():
t0 = time.time()
t1 = t0
while t1 == t0:
t1 = time.time()
return (t0, t1, t1-t0)
samples = [measure() for i in range(10)]
for s in samples:
print s
Which outputs:
(1390455900.085, 1390455900.086, 0.0009999275207519531)
(1390455900.086, 1390455900.087, 0.0009999275207519531)
(1390455900.087, 1390455900.088, 0.0010001659393310547)
(1390455900.088, 1390455900.089, 0.0009999275207519531)
(1390455900.089, 1390455900.09, 0.0009999275207519531)
(1390455900.09, 1390455900.091, 0.0010001659393310547)
(1390455900.091, 1390455900.092, 0.0009999275207519531)
(1390455900.092, 1390455900.093, 0.0009999275207519531)
(1390455900.093, 1390455900.094, 0.0010001659393310547)
(1390455900.094, 1390455900.095, 0.0009999275207519531)
And a way to do a 1000 sample average of the delta:
reduce( lambda a,b:a+b, [measure()[2] for i in range(1000)], 0.0) / 1000.0
Which output on two consecutive runs:
0.001
0.0010009999275207519
So time.time() on my Windows 8 x64 has a resolution of 1 msec.
A similar run on time.clock() returns a resolution of 0.4 microseconds:
def measure_clock():
t0 = time.clock()
t1 = time.clock()
while t1 == t0:
t1 = time.clock()
return (t0, t1, t1-t0)
reduce( lambda a,b:a+b, [measure_clock()[2] for i in range(1000000)] )/1000000.0
Returns:
4.3571334791658954e-07
Which is ~0.4e-06
An interesting thing about time.clock() is that it returns the time since the method was first called, so if you wanted microsecond-resolution wall time you could do something like this:
class HighPrecisionWallTime():
def __init__(self,):
self._wall_time_0 = time.time()
self._clock_0 = time.clock()
def sample(self,):
dc = time.clock()-self._clock_0
return self._wall_time_0 + dc
(which would probably drift after a while, but you could correct this occasionally, for example by re-syncing whenever dc > 3600, which would correct it every hour)
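A minimal sketch of that occasional re-sync, assuming a pre-3.8 Python where time.clock() still exists (the class and parameter names are just illustrative):

import time

class HighPrecisionWallTimeResync:
    """Wall-clock time with time.clock()'s resolution, re-synced periodically to limit drift."""
    def __init__(self, resync_after_sec=3600):
        self._resync_after_sec = resync_after_sec
        self._resync()

    def _resync(self):
        self._wall_time_0 = time.time()
        self._clock_0 = time.clock()

    def sample(self):
        dc = time.clock() - self._clock_0
        if dc > self._resync_after_sec:  # e.g. once per hour
            self._resync()
            dc = time.clock() - self._clock_0
        return self._wall_time_0 + dc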
Upvotes: 27
Reputation: 6461
You can also use time.clock(). It counts the time used by the process on Unix, and the time since the first call to it on Windows. It's more precise than time.time(), and it's the function usually used to measure performance.
Just call:
import time
t_ = time.clock()
#Your code here
print 'Time in function', time.clock() - t_
EDITED: Oops, I misread the question; you want to know the exact time, not the time spent...
Upvotes: 14
Reputation: 26699
Python tries hard to use the most precise time function for your platform to implement time.time():
/* Implement floattime() for various platforms */
static double
floattime(void)
{
/* There are three ways to get the time:
(1) gettimeofday() -- resolution in microseconds
(2) ftime() -- resolution in milliseconds
(3) time() -- resolution in seconds
In all cases the return value is a float in seconds.
Since on some systems (e.g. SCO ODT 3.0) gettimeofday() may
fail, so we fall back on ftime() or time().
Note: clock resolution does not imply clock accuracy! */
#ifdef HAVE_GETTIMEOFDAY
{
struct timeval t;
#ifdef GETTIMEOFDAY_NO_TZ
if (gettimeofday(&t) == 0)
return (double)t.tv_sec + t.tv_usec*0.000001;
#else /* !GETTIMEOFDAY_NO_TZ */
if (gettimeofday(&t, (struct timezone *)NULL) == 0)
return (double)t.tv_sec + t.tv_usec*0.000001;
#endif /* !GETTIMEOFDAY_NO_TZ */
}
#endif /* !HAVE_GETTIMEOFDAY */
{
#if defined(HAVE_FTIME)
struct timeb t;
ftime(&t);
return (double)t.time + (double)t.millitm * (double)0.001;
#else /* !HAVE_FTIME */
time_t secs;
time(&secs);
return (double)secs;
#endif /* !HAVE_FTIME */
}
}
( from http://svn.python.org/view/python/trunk/Modules/timemodule.c?revision=81756&view=markup )
Upvotes: 26