Bozkurthan

Reputation: 141

Cannot receive gstreamer UDP Stream from OpenCV Gstreamer

I'm working with Gazebo Sim, which uses the GStreamer plugin to stream camera video over UDP. The simulation runs on Ubuntu 18.04.

There are some resources explaining the backend of this setup: Gazebo Simulation PX4 Guide

It explains how to create the pipeline:

The video from Gazebo should then display in QGroundControl just as it would from a real camera.

It is also possible to view the video using the Gstreamer Pipeline. Simply enter the following terminal command:

gst-launch-1.0  -v udpsrc port=5600 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' \
! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink fps-update-interval=1000 sync=false

It works well in the terminal. I also read these questions:

using gstreamer with python opencv to capture live stream?

Write in Gstreamer pipeline from opencv in python

So I tried to use this pipeline in OpenCV with the following lines:

import cv2
import sys

video = cv2.VideoCapture('udpsrc port=5600 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink fps-update-interval=1000 sync=false', cv2.CAP_GSTREAMER)

# video.set(cv2.CAP_PROP_BUFFERSIZE, 3)
# Exit if video not opened.
if not video.isOpened():
    print("Could not open video")
    sys.exit()

# Read first frame.
ok, frame = video.read()
if not ok:
    print('Cannot read video file')
    sys.exit()

But it only gives this error:

Could not open video

I tried different variations of this pipeline in OpenCV, but none of them helped.

Upvotes: 3

Views: 6624

Answers (4)

Kaktusowy500

Reputation: 11

Check whether your OpenCV build has GStreamer support:

print(cv2.getBuildInformation())

The output should include:

Video I/O:
    FFMPEG:                      YES
      avcodec:                   YES (58.54.100)
      avformat:                  YES (58.29.100)
      avutil:                    YES (56.31.100)
      swscale:                   YES (5.5.100)
      avresample:                YES (4.0.0)
    GStreamer:                   YES (1.16.2)
    v4l/v4l2:                    YES (linux/videodev2.h)
    Intel Media SDK:             YES (/mnt/nfs/msdk/lin-18.4.1/lib64/libmfx.so)
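This check can also be made programmatic. A minimal sketch, assuming only that `cv2.getBuildInformation()` returns text like the excerpt above; the `has_gstreamer` helper name is my own:

```python
import re

def has_gstreamer(build_info: str) -> bool:
    """Return True if the OpenCV build-information text reports GStreamer support."""
    match = re.search(r"GStreamer:\s+(\S+)", build_info)
    return bool(match) and match.group(1).upper().startswith("YES")

# In a real script you would pass cv2.getBuildInformation() here;
# the sample below uses the excerpt from this answer.
sample = """
    GStreamer:                   YES (1.16.2)
    v4l/v4l2:                    YES (linux/videodev2.h)
"""
print(has_gstreamer(sample))  # True when the build lists GStreamer: YES
```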

Upvotes: 1

Bozkurthan

Reputation: 141

The following code works without errors:

# Read video
video = cv2.VideoCapture("udpsrc port=5600 ! application/x-rtp,payload=96,encoding-name=H264 ! rtpjitterbuffer mode=1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! appsink", cv2.CAP_GSTREAMER);

I think the decoding part of my original pipeline wasn't right.
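For reference, the working pipeline string can be parameterized on the UDP port. A small sketch; the `udp_h264_pipeline` helper is hypothetical, but the element chain is copied verbatim from the pipeline above:

```python
def udp_h264_pipeline(port: int = 5600) -> str:
    """Build the GStreamer pipeline string from the answer above for a given UDP port."""
    return (
        f"udpsrc port={port} "
        "! application/x-rtp,payload=96,encoding-name=H264 "
        "! rtpjitterbuffer mode=1 ! rtph264depay ! h264parse "
        "! decodebin ! videoconvert ! appsink"
    )

# Usage (requires an OpenCV build with GStreamer support):
# video = cv2.VideoCapture(udp_h264_pipeline(5600), cv2.CAP_GSTREAMER)
print(udp_h264_pipeline(5600))
```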

Upvotes: 2

codeCat

Reputation: 63

I tried this code, but it did not work. I also tried a different pipeline; below are my terminal pipelines:

Sender:

gst-launch-1.0 -v realsensesrc serial=$rs_serial timestamp-mode=clock_all enable-color=true ! rgbddemux name=demux demux.src_depth ! queue ! colorizer near-cut=300 far-cut=3000 ! rtpvrawpay ! udpsink host=192.168.100.80 port=9001

Receiver:

gst-launch-1.0 udpsrc port=9001 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGB, depth=(string)8, width=(string)1280, height=(string)720, payload=(int)96" ! rtpvrawdepay ! videoconvert ! queue ! fpsdisplaysink sync=false

I can see the video using the above terminal pipeline on the receiver. But when I converted it to Python code, the output is:

Could not open Video

gst_receiver.py

import cv2
import sys

video = cv2.VideoCapture(
    'udpsrc port=9001 ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW,'
    'sampling=(string)RGB, depth=(string)8, width=(string)1280, height=(string)720, payload=(int)96'
    ' ! rtpvrawdepay ! decodebin ! videoconvert ! queue ! appsink', cv2.CAP_GSTREAMER)

# video.set(cv2.CAP_PROP_BUFFERSIZE,3)
# Exit if video not opened.
if not video.isOpened():
    print("Could not open Video")
    sys.exit()

# Read first frame.
ok, frame = video.read()
if not ok:
    print('Cannot read Video file')
    sys.exit()

System:

Sender-PC = Ubuntu 18.04
Receiver-PC = Windows 10
Python = 3.7.9
OpenCV = 4.5.5 
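One difference between this Python pipeline and the working terminal receiver is the extra `decodebin` after `rtpvrawdepay`; the depayloader already outputs raw video, so a pipeline that mirrors the terminal command more closely may behave better. A hedged sketch (the caps values are copied from the receiver command above; `udp_raw_pipeline` is my own helper name):

```python
def udp_raw_pipeline(port: int = 9001) -> str:
    """Pipeline mirroring the working terminal receiver, with appsink for OpenCV."""
    caps = (
        "application/x-rtp, media=(string)video, clock-rate=(int)90000, "
        "encoding-name=(string)RAW, sampling=(string)RGB, depth=(string)8, "
        "width=(string)1280, height=(string)720, payload=(int)96"
    )
    return f"udpsrc port={port} ! {caps} ! rtpvrawdepay ! videoconvert ! queue ! appsink"

# video = cv2.VideoCapture(udp_raw_pipeline(9001), cv2.CAP_GSTREAMER)
print(udp_raw_pipeline(9001))
```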

Upvotes: 0

Velovix

Reputation: 567

Currently, your pipeline provides no way for OpenCV to extract decoded video frames from the pipeline. This is because all frames go to the autovideosink element at the end which takes care of displaying the frames on-screen. Instead, you should use the appsink element, which is made specifically to allow applications to receive video frames from the pipeline.

video = cv2.VideoCapture(
    'udpsrc port=5600 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264"'
    ' ! rtph264depay'
    ' ! avdec_h264'
    ' ! videoconvert'
    ' ! appsink', cv2.CAP_GSTREAMER)

Upvotes: 2
