Reputation: 321
I'm using the OpenCV library with YOLOv3 and Darknet in my project. My app is written in C++; it reads an RTSP stream and looks for a human in it. I run it on my Nvidia Jetson Nano and everything works, but there is one small issue: there is a noticeable delay in the video analysis. When I step into the camera's view area, I see a lag of roughly 20 seconds before detection.
I'm analyzing the substream (720p, 2 fps), but on recognition I would like to capture the corresponding moment from the main stream (1080p, 15 fps), which I record using ffmpeg. To do that I need to either (1) have no delay in recognition or (2) measure the delay during recognition so I can work out which second of the main video I need to capture. I suppose (1) is not possible.
Do you know if OpenCV has an option to report this delay? How can I measure it?
P.S. The delay is not always the same, but I've noticed it ranges from 10 to 20 seconds.
Thanks a lot for any help ;)
Upvotes: 1
Views: 341
Reputation: 340
It will be hard to sync the streams, as both the fps and the stream channels are different. Another problem is the RTSP stream itself: OpenCV can skip a lot of frames because of bottlenecks, and you can't get them back.
You may find an answer if you look at where your bottleneck is. Since this is a deep learning algorithm, most of the GPU/CPU time is probably spent in the detection step.
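For example, you can time the forward pass with std::chrono, roughly like this (a minimal sketch; the file names and URL are placeholders for your own setup):

```cpp
// Sketch: measure how long the YOLOv3 forward pass takes per frame.
// Paths and URL are placeholders -- adapt to your setup.
#include <chrono>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov3.cfg", "yolov3.weights");
    cv::VideoCapture cap("rtsp://camera/substream");

    cv::Mat frame;
    while (cap.read(frame)) {
        auto t0 = std::chrono::steady_clock::now();

        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255, cv::Size(416, 416),
                                              cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
        net.setInput(blob);
        std::vector<cv::Mat> outs;
        net.forward(outs, net.getUnconnectedOutLayersNames());

        auto t1 = std::chrono::steady_clock::now();
        std::cout << "detection: "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms" << std::endl;
    }
    return 0;
}
```

If one frame takes longer than 500 ms (the frame interval of your 2 fps substream), frames can pile up in the capture buffer and the lag you see keeps growing.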
What I would do is: ignore the substream and focus your code on the main stream, push frames into a buffer and run detection; if your conditions are met, iterate over that buffer to save what you need.
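Something like this sketch, assuming a hypothetical detectPerson() that wraps your YOLOv3 call (the URL is a placeholder too). If you also timestamp each frame as you grab it, the matched frame's timestamp gives you the "which second of the main video" information you asked about:

```cpp
// Sketch: keep the last ~30 s of main-stream frames in a ring buffer;
// on detection, save the whole buffer.
// detectPerson() and the URL are placeholders for your own code.
#include <chrono>
#include <deque>
#include <string>
#include <opencv2/opencv.hpp>

using Clock = std::chrono::steady_clock;
struct Stamped { Clock::time_point grabbed; cv::Mat frame; };

bool detectPerson(const cv::Mat&) { /* run YOLOv3 here */ return false; }

int main() {
    cv::VideoCapture cap("rtsp://camera/mainstream");
    std::deque<Stamped> buffer;
    const size_t kMaxFrames = 15 * 30;  // ~30 s at 15 fps

    cv::Mat frame;
    while (cap.read(frame)) {
        // clone() because VideoCapture may reuse the underlying buffer
        buffer.push_back({Clock::now(), frame.clone()});
        if (buffer.size() > kMaxFrames) buffer.pop_front();

        if (detectPerson(buffer.back().frame)) {
            // buffer.back().grabbed is the moment of the match -- it tells
            // you which second of the ffmpeg recording to keep.
            int i = 0;
            for (const Stamped& s : buffer)
                cv::imwrite("match_" + std::to_string(i++) + ".jpg", s.frame);
            buffer.clear();
        }
    }
    return 0;
}
```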
PS: This can cause problems too, due to the time needed to save the buffer to disk. (Maybe creating a thread for this will help; see the sketch below.)
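A sketch of such a thread, with a simple job queue so disk I/O stays off the capture loop; the class name and the JPEG output are just illustrative:

```cpp
// Sketch: a background thread that saves clips so the capture loop
// never blocks on disk I/O. Class name and output format are illustrative.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>
#include <vector>
#include <opencv2/opencv.hpp>

class AsyncSaver {
public:
    AsyncSaver() : worker_(&AsyncSaver::run, this) {}
    ~AsyncSaver() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();  // drains remaining jobs before exiting
    }
    // Called from the capture loop: enqueue a clip and return immediately.
    void save(std::vector<cv::Mat> clip) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(clip)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return done_ || !jobs_.empty(); });
            if (jobs_.empty()) return;  // done_ set and nothing left to save
            std::vector<cv::Mat> clip = std::move(jobs_.front());
            jobs_.pop();
            lk.unlock();
            // slow disk I/O happens here, off the capture thread
            int i = 0;
            for (const cv::Mat& f : clip)
                cv::imwrite("clip_" + std::to_string(i++) + ".jpg", f);
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::vector<cv::Mat>> jobs_;
    bool done_ = false;
    std::thread worker_;
};
```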
Upvotes: 2