Reputation: 25
I am currently implementing an algorithm that synchronizes playback and record functionality in Python. This algorithm will be used to measure the time lag, or delay, between a microphone and a speaker, so the timing has to be very accurate in terms of internal latency and software execution time. I was able to synchronize these functions using the threading module, but I am unsure whether this is the best option, given that other modules such as multiprocessing are out there. Given my lack of expertise and experience with multithreading (async/sync) in Python, I have only implemented the basics of the threading module, as shown in my script below. Also, I know that threading has lock functionality, but would that be useful given my application?
As I mentioned before, accurate timing is crucial for my application. I am trying to time-stamp the instant that the recording and playback functions are each executed; so far I have simply called time.time() before data/samples are fed into each buffer. I have come to find that time.clock() and time.process_time() might give me a more accurate time-stamp, but I am pretty sure there are even better solutions out there.
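For context, here is a minimal sketch comparing those clocks (Python 3.3+; time.perf_counter() is not in my script, but it is the documented replacement for the deprecated time.clock()):

```python
import time

# time.time(): wall-clock timestamp; comparable across processes, but
# it can jump if the system clock is adjusted
wall = time.time()

# time.perf_counter(): monotonic, highest-resolution clock available;
# the usual choice for measuring short intervals
t0 = time.perf_counter()
time.sleep(0.01)
elapsed = time.perf_counter() - t0

# time.process_time(): CPU time of this process only; it does NOT
# advance while sleeping or blocked on I/O, so it cannot measure
# audio latency
cpu = time.process_time()

print("slept for {:.4f} s of wall time".format(elapsed))
```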
#!/usr/bin/env python3
import pyaudio
import numpy as np
import time
import wave
import threading

rec_start = 0.0
play_start = 0.0
rec_signal = np.array([], dtype=np.float64)

def record():
    RATE = 16000
    DURATION = 0.5
    CHUNKSIZE_REC = 2**12
    global rec_signal
    global rec_start
    #initialize portaudio
    p_rec = pyaudio.PyAudio()
    stream = p_rec.open(format=pyaudio.paInt16,
                        channels=1, rate=RATE,
                        input=True, frames_per_buffer=CHUNKSIZE_REC)
    frames = []
    rec_start = time.time()
    for _ in range(0, int(RATE / CHUNKSIZE_REC * DURATION)):
        data = stream.read(CHUNKSIZE_REC)
        #np.fromstring is deprecated for binary data; frombuffer replaces it
        frames.append(np.frombuffer(data, dtype=np.int16))
    #convert the list of numpy-arrays into a 1D array (column-wise)
    numpydata = np.hstack(frames)
    #close stream
    stream.stop_stream()
    stream.close()
    p_rec.terminate()
    rec_signal = numpydata
#end def
#-----------------------------------------------------------------------------
def playback():
    CHUNKSIZE_PLAY = 2**12
    global play_start
    wf = wave.open('AC_PRN_Signal.wav', 'rb')
    p_play = pyaudio.PyAudio()
    stream = p_play.open(format=p_play.get_format_from_width(wf.getsampwidth()),
                         channels=wf.getnchannels(),
                         rate=wf.getframerate(),
                         output=True)
    playData = wf.readframes(CHUNKSIZE_PLAY)
    time.sleep(0.005)
    play_start = time.time()
    #readframes() returns bytes in Python 3, so compare against b''
    while playData != b'':
        stream.write(playData)
        playData = wf.readframes(CHUNKSIZE_PLAY)
    stream.stop_stream()
    stream.close()
    p_play.terminate()
#end def
#-----------------------------------------------------------------------
def play_while_recording():
    rec_thread = threading.Thread(target=record)
    play_thread = threading.Thread(target=playback)
    '''start recording while playing back signal'''
    rec_thread.start()
    play_thread.start()
    '''stop both threads before exiting func.'''
    play_thread.join()
    rec_thread.join()
#end def
#---------------------------------------------------------------------
if __name__ == "__main__":
    play_while_recording()
    print("rec start @ " + str(rec_start))
    print("play start @ " + str(play_start))
    print("time_delta: " + str((play_start - rec_start) * 1000.0) + "ms")
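One idea I have considered for tightening the start alignment (a sketch only, not part of my script above) is to release both threads through a threading.Barrier so they reach their timestamping point as close together as possible:

```python
import threading
import time

# Hypothetical sketch: both threads block on the barrier, then are
# released together just before they take their timestamps.
start_barrier = threading.Barrier(2)
timestamps = {}

def worker(name):
    start_barrier.wait()                    # blocks until both arrive
    timestamps[name] = time.perf_counter()  # timestamp right after release

threads = [threading.Thread(target=worker, args=(name,))
           for name in ("rec", "play")]
for t in threads:
    t.start()
for t in threads:
    t.join()

skew_ms = abs(timestamps["rec"] - timestamps["play"]) * 1000.0
print("start skew: {:.3f} ms".format(skew_ms))
```

The GIL still serializes the two timestamp calls, so some skew remains, but the barrier removes the variable delay between the two start() calls.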
To add, I have also implemented and tested with the subprocess module in Python to call the Linux ALSA aplay and arecord utilities, as shown below, and then read the resulting wave files using scipy.io.wavfile to carry out the post-processing. But I have found it really hard to obtain the time instant of execution, or even to time-stamp it.
import subprocess

def playback():
    global play_start
    time.sleep(0.005)
    play_start = time.time()
    subprocess.Popen(["/usr/bin/aplay", "test_audio.wav"])
#end def
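Part of the difficulty is that Popen returns as soon as the child process is spawned, so the timestamp above marks the launch of aplay, not the moment audio actually starts. A small sketch that makes the launch overhead visible (with "sleep" standing in for aplay so it runs anywhere):

```python
import subprocess
import time

# Timestamp just before spawning the child and again once it exits;
# "sleep 0.2" is a stand-in for the aplay call from the question.
t0 = time.perf_counter()
proc = subprocess.Popen(["sleep", "0.2"])
spawn_ms = (time.perf_counter() - t0) * 1000.0  # fork/exec cost only
proc.wait()
total_s = time.perf_counter() - t0              # includes child runtime

print("spawn took {:.2f} ms, total {:.2f} s".format(spawn_ms, total_s))
```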
Upvotes: 0
Views: 1552
Reputation: 51
You can find plenty of debate about timestamping execution; I can share my two cents briefly:
If you want to profile your code, you can use cProfile, which also lets you choose which clock to use: wall-clock time or process time.
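As a sketch of that, the timer= argument of cProfile.Profile is the documented knob for choosing the clock (work() here is just a stand-in workload):

```python
import cProfile
import pstats
import time

def work():
    # stand-in workload to profile
    return sum(i * i for i in range(100_000))

# Pass a timer callable to choose the clock:
# time.perf_counter -> wall-clock, time.process_time -> CPU time
profiler = cProfile.Profile(timer=time.perf_counter)
profiler.enable()
work()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)  # top 5 entries
```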
Upvotes: 1