Reputation: 53
I am trying to log data with a high sampling rate using a Raspberry Pi 3 B+. In order to achieve a fixed sampling rate, I am delaying the while loop, but I always get a sample rate that is a little lower than the one I specify.
For 2500 Hz I get ~2450 Hz
For 5000 Hz I get ~4800 Hz
For 10000 Hz I get ~9300 Hz
Here is the code that I use to delay the while loop:
import time

count=0
while True:
    sample_rate=5000
    time_start=time.perf_counter()
    count+=1
    while (time.perf_counter()-time_start) < (1/sample_rate):
        pass
    if count == sample_rate:
        print(1/(time.perf_counter()-time_start))
        count=0
I have also tried updating to Python 3.7 and using time.perf_counter_ns(), but it does not make a difference.
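For reference, the perf_counter_ns() variant looked roughly like this (a sketch of the same loop, with the period held as a whole number of nanoseconds):

import time

count = 0
sample_rate = 5000
period_ns = 1_000_000_000 // sample_rate  # period as an integer number of nanoseconds
while True:
    time_start = time.perf_counter_ns()
    count += 1
    while (time.perf_counter_ns() - time_start) < period_ns:
        pass
    if count == sample_rate:
        print(1e9 / (time.perf_counter_ns() - time_start))
        count = 0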
Upvotes: 5
Views: 1444
Reputation: 1916
I think you can fix this pretty easily by rearranging your code like this:
import time

count=0
sample_rate=5000
while True:
    time_start=time.perf_counter()
    # do all the real stuff here
    while (time.perf_counter()-time_start) < (1/sample_rate):
        pass
This way Python does the waiting after you execute the code, rather than before, so the time the interpreter takes to run your code is not added on top of each sampling period. As danny said, it's an interpreted language, so that might introduce timing inconsistencies, but this way should at least reduce the effect you are seeing.
Edit for proof that this works:
import sys
import time

count=0
sample_rate=int(sys.argv[1])
run_start = time.time()
while True:
    time_start=time.time()
    # stand-in for the real work: some throwaway looping and list concatenation
    a = range(10)
    b = range(10)
    for x in a:
        for y in b:
            c = a+b
    count += 1
    if count == sample_rate*2:
        break
    while (time.time()-time_start) < (1.0/sample_rate):
        pass
real_rate = sample_rate*2/(time.time()-run_start)
print real_rate, real_rate/sample_rate
So the testing code does a solid amount of throwaway work for two seconds' worth of samples and then prints the real rate and the fraction of the requested rate it actually achieved. Here are some results:
~ ><> python t.py 1000
999.378471674 0.999378471674
~ ><> python t.py 2000
1995.98713838 0.99799356919
~ ><> python t.py 5000
4980.90553757 0.996181107514
~ ><> python t.py 10000
9939.73553783 0.993973553783
~ ><> python t.py 40000
38343.706669 0.958592666726
So, not perfect. But definitely better than a ~700 Hz drop at a desired 10000 Hz. The accepted answer is definitely the right one.
Upvotes: 1
Reputation: 6826
The problem you are seeing arises because each pass through the loop starts its delay from the real time at which that pass happens to begin. Time spent in untimed code, plus jitter from OS multitasking, therefore accumulates from one pass to the next, reducing the overall sample rate below what you want to achieve.
To greatly increase the timing accuracy, use the fact that each loop iteration "should" finish one period (1/sample_rate) after it should have started. Keep that start time as an absolute, calculated value rather than re-reading the real clock, and wait until one period after that absolute start time; then there is no drift in the timing.
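Stripped down to its core, the idea is just to advance an absolute deadline by one period per iteration instead of restarting from the current clock reading. A minimal sketch in Python 3 using time.perf_counter (the run_sampler name and do_sample callback are only placeholders for illustration):

import time

def run_sampler(ratehz, do_sample):
    period = 1.0 / ratehz
    # the deadline is advanced by exactly one period each iteration,
    # so small per-iteration overruns cannot accumulate into drift
    deadline = time.perf_counter() + period
    while True:
        do_sample()
        # busy-wait until the absolute deadline, then schedule the next one
        while time.perf_counter() < deadline:
            pass
        deadline += period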
I put your timing into timing_orig and my revised code using absolute times into timing_new - and results are below.
import time

def timing_orig(ratehz,timefun=time.clock):
    count=0
    while True:
        sample_rate=ratehz
        time_start=timefun()
        count+=1
        while (timefun()-time_start) < (1.0/sample_rate):
            pass
        if count == ratehz:
            break

def timing_new(ratehz,timefun=time.clock):
    count=0
    delta = (1.0/ratehz)
    # record the start of the sequence of timed periods
    time_start=timefun()
    while True:
        count+=1
        # this period ends delta from "now" (now is the time_start PLUS a number of deltas)
        time_next = time_start+delta
        # wait until the end time has passed
        while timefun()<time_next:
            pass
        # calculate the idealised "now" as delta from the start of this period
        time_start = time_next
        if count == ratehz:
            break

def timing(functotime,ratehz,ntimes,timefun=time.clock):
    starttime = timefun()
    for n in range(int(ntimes)):
        functotime(ratehz,timefun)
    endtime = timefun()
    # print endtime-starttime
    return ratehz*ntimes/(endtime-starttime)

if __name__=='__main__':
    print "new 5000",timing(timing_new,5000.0,10.0)
    print "old 5000",timing(timing_orig,5000.0,10.0)
    print "new 10000",timing(timing_new,10000.0,10.0)
    print "old 10000",timing(timing_orig,10000.0,10.0)
    print "new 50000",timing(timing_new,50000.0,10.0)
    print "old 50000",timing(timing_orig,50000.0,10.0)
    print "new 100000",timing(timing_new,100000.0,10.0)
    print "old 100000",timing(timing_orig,100000.0,10.0)
Results:
new 5000 4999.96331002
old 5000 4991.73952992
new 10000 9999.92662005
old 10000 9956.9314274
new 50000 49999.6477761
old 50000 49591.6104893
new 100000 99999.2172809
old 100000 94841.227219
Note I didn't use time.sleep() because it introduced too much jitter. Also note that even though this minimal example shows very accurate timing up to 100 kHz on my Windows laptop, if you put more code into the loop than there is time to execute, the timing will run correspondingly slowly.
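For completeness, a common compromise (not used or benchmarked in the code above) is to sleep away most of each period and only busy-wait the final stretch, which cuts CPU usage at the cost of some jitter. A rough Python 3 sketch, where the wait_until name and the spin_margin value are purely illustrative:

import time

def wait_until(deadline, spin_margin=0.0005, timefun=time.perf_counter):
    # sleep in coarse steps while the deadline is still comfortably far away...
    while True:
        remaining = deadline - timefun()
        if remaining <= spin_margin:
            break
        time.sleep(remaining - spin_margin)
    # ...then busy-wait the last fraction of a millisecond for accuracy
    while timefun() < deadline:
        pass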
Apologies, I used Python 2.7, which doesn't have the very convenient time.perf_counter() function - to run this on Python 3, add an extra argument timefun=time.perf_counter (the function itself, not a call to it) to each of the calls to timing().
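On Python 3 the calls would then look something like this (a sketch of the change just described):

# Python 3: pass time.perf_counter itself (no parentheses) as the timing function.
# Note time.clock was removed in Python 3.8, so the timefun=time.clock defaults
# in the function definitions above would need the same substitution.
print("new 5000", timing(timing_new, 5000.0, 10.0, timefun=time.perf_counter))
print("old 5000", timing(timing_orig, 5000.0, 10.0, timefun=time.perf_counter))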
Upvotes: 1