Reputation: 139
I'm building a gst-rtsp-server pipeline that has a tee (the pipeline is created when a client connects). However, when a client connects, the autovideosink shows only one frame and then sticks. Without the tee/autovideosink it works fine. Why does it stick/freeze?
RTSP Server launch string: videotestsrc pattern=ball ! videoconvert ! video/x-raw, width=(int)800, height=(int)800, format=(string)I420 ! tee name=t ! x264enc ! rtph264pay name=pay0 pt=96 t. ! autovideosink
Client: gst-launch-1.0 rtspsrc protocols=tcp buffer-mode=1 location=rtsp://127.0.0.1:8554/test latency=0 ! rtph264depay ! avdec_h264 ! queue ! videoconvert ! xvimagesink sync=false
Client output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://127.0.0.1:8554/test
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
0:00:20.123212962 22872 0x55f4a8b5e2d0 WARN rtspsrc gstrtspsrc.c:5917:gst_rtsp_src_receive_response:<rtspsrc0> error: Could not receive message. (Timeout while waiting for server response)
0:00:20.123472189 22872 0x55f4a8b5e2d0 WARN rtspsrc gstrtspsrc.c:7548:gst_rtspsrc_open:<rtspsrc0> can't get sdp
0:00:20.123525806 22872 0x55f4a8b5e2d0 WARN rtspsrc gstrtspsrc.c:5628:gst_rtspsrc_loop:<rtspsrc0> we are not connected
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not read from resource.
Additional debug info:
gstrtspsrc.c(5917): gst_rtsp_src_receive_response (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Could not receive message. (Timeout while waiting for server response)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Upvotes: 0
Views: 1630
Reputation: 7383
As the documentation notes, always use a queue after each tee branch:
https://gstreamer.freedesktop.org/documentation/coreelements/tee.html?gi-language=c
One needs to use separate queue elements (or a multiqueue) in each branch to provide separate threads for each branch. Otherwise a blocked dataflow in one branch would stall the other branches.
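Applied to the launch string from the question, that means adding a queue in each branch coming out of the tee, for example (a sketch; the rest of the pipeline is unchanged, and on its own this is not yet enough here, see the x264enc note below):
videotestsrc pattern=ball ! videoconvert ! video/x-raw, width=(int)800, height=(int)800, format=(string)I420 ! tee name=t ! queue ! x264enc ! rtph264pay name=pay0 pt=96 t. ! queue ! autovideosink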
In your particular case, you should also add the tune=zerolatency option to the x264enc element. That is because the default latency of x264enc is higher than what the default queue sizes can absorb, so a queue fills up and stalls the tee before the encoder has produced its first buffer. Alternatively, you need to increase the queue sizes to compensate for x264enc's latency.
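Putting both changes together, the server launch string would look roughly like this (a sketch based on the pipeline from the question):
videotestsrc pattern=ball ! videoconvert ! video/x-raw, width=(int)800, height=(int)800, format=(string)I420 ! tee name=t ! queue ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 t. ! queue ! autovideosink
If you would rather keep the default x264enc settings, the alternative is to raise the queue limits instead, e.g. queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 (0 disables that limit), at the cost of more buffering and latency.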
Upvotes: 2