Reputation: 11
Platform: Jetson Nano B01, OS: Ubuntu 18.04, Camera module: Raspi cam v2.1 IMX219 (CSI interface)
Problem overview: My team is developing a machine-vision application that needs to record video at a high frame rate (>=120 fps) while running live inference on the same video at a low rate (~2 fps). Is there a GStreamer element we could use to pull a frame out of the pipeline at set intervals and save it to disk?
Current GStreamer pipeline: gst-launch-1.0 nvarguscamerasrc num-buffers=-1 gainrange="1 1" ispdigitalgainrange="2 2" ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=120/1,format=NV12' ! omxh264enc ! qtmux ! filesink location=test1.mp4 -e
Additional info: The idea is to have a function loop continuously, checking for a new image file at a specific location; when it detects one, it sends the image to the neural net for inference and then deletes the file. We achieved moderate success with this approach on x86 machines using multi-threaded recording in OpenCV, but as far as we know the Jetson Nano doesn't have enough CPU power to meet our needs with OpenCV.
The pipeline above records video that meets our required specs, but it does not save any images to be used for inference.
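For reference, the polling loop described above can be sketched roughly like this (the directory path, file extensions, and the inference call are placeholders, not our actual code):

```python
import os
import time

WATCH_DIR = "/tmp/frames"   # placeholder: wherever the pipeline writes frames
POLL_INTERVAL = 0.1         # seconds between directory checks

def run_inference(image_path):
    # Stand-in for the real neural-net call.
    print("inferring on", image_path)

def watch_once(watch_dir=WATCH_DIR):
    """Process and delete any image files currently in the watch directory."""
    processed = []
    for name in sorted(os.listdir(watch_dir)):
        if name.lower().endswith((".jpg", ".png")):
            path = os.path.join(watch_dir, name)
            run_inference(path)
            os.remove(path)  # delete so the same frame is not inferred twice
            processed.append(name)
    return processed

def watch_forever():
    while True:
        watch_once()
        time.sleep(POLL_INTERVAL)
```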
Upvotes: 1
Views: 919
Reputation: 76
when it detects a new image file it will send this to the neural net for inference and delete the image file.
You do not need to save image files to disk or run the two parts as separate processes that communicate through saved files.
If you want to keep the current structure, try the "videorate" and "appsink" elements. E.g., instead of writing with filesink and reading the file back from the app, run the pipeline inside the app:
... ! videorate ... ! appsink
And then, receive the incoming data directly in the app.
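A minimal sketch of that appsink approach, assuming GStreamer 1.x with the PyGObject bindings (videotestsrc stands in here for nvarguscamerasrc so it runs on any machine; on the Nano you would substitute the camera source and caps from the question):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# videorate drops the stream to ~2 fps before it reaches the app.
pipeline = Gst.parse_launch(
    "videotestsrc ! videorate ! video/x-raw,framerate=2/1 "
    "! videoconvert ! video/x-raw,format=RGB "
    "! appsink name=sink emit-signals=true max-buffers=1 drop=true"
)
sink = pipeline.get_by_name("sink")

def on_new_sample(appsink):
    sample = appsink.emit("pull-sample")
    buf = sample.get_buffer()
    ok, info = buf.map(Gst.MapFlags.READ)
    if ok:
        frame_bytes = info.data  # hand these raw RGB bytes to your inference code
        buf.unmap(info)
    return Gst.FlowReturn.OK

sink.connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```

With max-buffers=1 and drop=true, appsink keeps only the latest frame, so a slow inference step never backs up the capture side.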
Or, if you want to run inference with a conventional neural-network framework (e.g., TensorFlow, Caffe2, OpenVINO, and so on) and get the final results in your app:
You can merge the whole procedure into a single GStreamer pipeline along with videorate. The "tensor-filter" GStreamer element (https://github.com/nnstreamer/nnstreamer) lets you apply conventional AI frameworks directly inside a GStreamer pipeline.
E.g.,
... ! videorate ... ! video/x-raw,... ! tensor-converter ! tensor-transform ! tensor-filter framework=openvino model=PATH_TO_YOUR_MODELFILE ! ... (do whatever you want)
(tensor-transform is only needed if your network requires arithmetic operations, transpose, normalization, or similar preprocessing.)
If you want to separate threads, you may add "queue" accordingly.
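For example, a queue placed in front of the inference elements gives them their own streaming thread (a sketch combining the fragments above; the model path is a placeholder):

```
... ! videorate ! video/x-raw,framerate=2/1 ! queue ! tensor-converter ! tensor-filter framework=openvino model=PATH_TO_YOUR_MODELFILE ! ...
```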
Upvotes: 1
Reputation: 7383
It is hard to say at which stage in the pipeline you want this to happen, but take a look at the videorate element. It can drop frames to match a downstream framerate cap.
https://gstreamer.freedesktop.org/documentation/videorate/index.html?gi-language=c
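A quick sketch of that behavior (videotestsrc stands in for nvarguscamerasrc so it can be tried on any machine with GStreamer; on the Nano you would use the camera source and caps from the question):

```shell
# videorate drops the 120/1 stream down to 2/1 to satisfy the downstream caps filter.
gst-launch-1.0 videotestsrc ! video/x-raw,framerate=120/1 ! videorate ! video/x-raw,framerate=2/1 ! fakesink
```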
Upvotes: 0