kalleknast

Reputation: 311

GStreamer timestamps (PTS) are not monotonically increasing for captured frames

I wrote some code to capture frames from a webcam using GStreamer 1.0 (via PyGObject). It is important for me to know the exact time of capture. For this, I set the v4l2src property do-timestamp, and I use an appsink to write the buffer PTS to a text file.

However, the timestamps are not monotonically increasing. E.g. the timestamp of frame 16 is 0.88199 s, and the timestamp of frame 17 is 0.77462 s, i.e. 0.10737 s EARLIER than the previous frame. (I have a figure showing the problem, but lack the reputation necessary to post it.)
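To make the jumps easy to spot, here is a minimal check over the resulting log file; it assumes the comma-separated frame_number,timestamp format written by the code below and flags every backwards step:

# Scan the timestamp log for non-monotonic PTS values.
prev_ts = None
with open('webcam_timestamps.log') as log:
    for line in log:
        try:
            frame, ts = line.split(',')
            ts = float(ts)
        except ValueError:
            continue  # skip the header lines
        if prev_ts is not None and ts < prev_ts:
            print('frame %s: PTS stepped back by %0.5f s'
                  % (frame.strip(), prev_ts - ts))
        prev_ts = ts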

Is it expected that the PTS of captured GstBuffers is not always monotonically increasing? If this is not normal behavior, does anyone know what I messed up?

I use a Logitech C920 webcam. The frames are h.264-encoded on the camera. The code looks roughly like this:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, Gtk
from datetime import datetime
import numpy as np

GObject.threads_init()
Gst.init(None)

class Webcam:
    def __init__(self, video_dev='/dev/video0', fps=30):

        ts_log_fname = 'webcam_timestamps.log'
        vid_fname = 'webcam.mkv'
        self.ts_log = open(ts_log_fname, 'w')
        self.ts_log.write('video filename: %s\n'
                          '\nframe_number,cam_running_ts\n' % vid_fname)

        self.n_frames = 0

        # Create GStreamer pipeline
        self.pipeline = Gst.Pipeline()

        # Create bus to get events from GStreamer pipeline
        self.bus = self.pipeline.get_bus()
        self.bus.add_signal_watch()
        self.bus.connect('message::error', self.on_error)
        self.bus.enable_sync_message_emission()
        self.bus.connect('sync-message::element', self.on_sync_message)

        ###########################
        # Callback function
        ###########################
        def on_new_sample(appsink):
            """
            Called from the pipeline by appsink.
            Writes the timestamps of frame capture to a log file.
            """
            # Get the buffer
            smp = appsink.emit('pull-sample')
            buf = smp.get_buffer()
            # Nanoseconds to seconds
            timestamp = np.float64(1e-9) * buf.pts
            self.n_frames += 1
            self.ts_log.write('%d,%0.9f\n' % (self.n_frames, timestamp))
            # An appsink new-sample handler must return a Gst.FlowReturn
            return Gst.FlowReturn.OK

        ###########################
        # Create GStreamer elements
        ###########################
        # Video source:
        self.v4l2src0 = Gst.ElementFactory.make('v4l2src', None)
        self.v4l2src0.set_property('device', video_dev)
        self.v4l2src0.set_property('do-timestamp', True)
        # Video source filter: camera-delivered h.264 at the requested size/rate
        vid0caps = Gst.Caps.from_string('video/x-h264,width=%d,height=%d,'
                                        'framerate=%d/1' % (1280, 720, fps))
        self.vid0filter = Gst.ElementFactory.make('capsfilter', None)
        self.vid0filter.set_property('caps', vid0caps)
        # Parse video:
        self.vid0parse = Gst.ElementFactory.make('h264parse', None)
        # Split into display, file and timestamp branches:
        self.tee0 = Gst.ElementFactory.make('tee', None)
        self.tee0.set_property('name', 't0')
        ####
        # Display branch
        ####
        # Decode
        self.vid0decode = Gst.ElementFactory.make('avdec_h264', None)
        # Scale to display size:
        self.disp0scale = Gst.ElementFactory.make('videoscale', None)
        # Display filter caps:
        disp0caps = Gst.Caps.from_string('video/x-raw,width=%d,height=%d' %
                                         (800, 600))
        # Sinks:
        self.disp0sink = Gst.ElementFactory.make('autovideosink', None)
        self.disp0sink.set_property('filter-caps', disp0caps)
        ####
        # File branch
        ####
        self.mux = Gst.ElementFactory.make('matroskamux', None)
        self.file0sink = Gst.ElementFactory.make('filesink', None)
        self.file0sink.set_property('location', vid_fname)
        self.file0sink.set_property('sync', False)
        ####
        # Timestamp branch
        ####
        # Create appsink
        self.ts0sink = Gst.ElementFactory.make('appsink', None)
        # Setting properties of appsink
        ts0caps = vid0caps  # use same caps as for camera
        self.ts0sink.set_property('caps', ts0caps)
        self.ts0sink.set_property('max-buffers', 20)  # Limit memory usage
        # Tell sink to emit signals
        self.ts0sink.set_property('emit-signals', True)
        self.ts0sink.set_property('sync', False)  # No sync
        # Connect appsink to the timestamp-logging function
        self.ts0sink.connect('new-sample', on_new_sample)

        self.queue0 = Gst.ElementFactory.make('queue', None)
        self.queue1 = Gst.ElementFactory.make('queue', None)
        self.disp_queue = Gst.ElementFactory.make('queue', None)
        self.file_queue = Gst.ElementFactory.make('queue', None)
        self.ts_queue = Gst.ElementFactory.make('queue', None)

        # Add elements to the pipeline
        self.pipeline.add(self.v4l2src0)
        self.pipeline.add(self.vid0filter)
        self.pipeline.add(self.vid0parse)
        self.pipeline.add(self.tee0)
        self.pipeline.add(self.vid0decode)
        self.pipeline.add(self.disp0scale)
        self.pipeline.add(self.disp0sink)
        self.pipeline.add(self.mux)
        self.pipeline.add(self.file0sink)
        self.pipeline.add(self.ts0sink)
        self.pipeline.add(self.queue0)
        self.pipeline.add(self.queue1)
        self.pipeline.add(self.disp_queue)
        self.pipeline.add(self.file_queue)
        self.pipeline.add(self.ts_queue)

        ###############
        # Link elements
        ###############
        # video source
        if not self.v4l2src0.link(self.vid0filter):
            print('video source to video filter link failed')
        if not self.vid0filter.link(self.vid0parse):
            print('video filter to video parse link failed')
        if not self.vid0parse.link(self.tee0):
            print('video parse to tee link failed')
        # tee
        if not self.tee0.link(self.disp_queue):
            print('tee to display queue link failed')
        if not self.tee0.link(self.file_queue):
            print('tee to file queue link failed')
        if not self.tee0.link(self.ts_queue):
            print('tee to ts queue link failed')
        # video display sink
        if not self.disp_queue.link(self.vid0decode):
            print('display queue to video decode link failed')
        if not self.vid0decode.link(self.disp0scale):
            print('decode to videoscale link failed')
        if not self.disp0scale.link(self.queue0):
            print('disp0scale to queue0 link failed')
        if not self.queue0.link_filtered(self.disp0sink, disp0caps):
            print('queue0 to display-sink link failed')
        # file sink
        if not self.file_queue.link(self.mux):
            print('file queue to mux link failed')
        if not self.mux.link(self.queue1):
            print('mux to queue1 link failed')
        if not self.queue1.link(self.file0sink):
            print('queue1 to file-sink link failed')
        # timestamp sink
        if not self.ts_queue.link(self.ts0sink):
            print('ts queue to ts-sink link failed')

    def run(self):
        # self.t_start is set elsewhere in the full program (not shown here)
        self.offset_t = datetime.now().timestamp() - self.t_start
        self.pipeline.set_state(Gst.State.PLAYING)

    def quit(self):
        self.pipeline.set_state(Gst.State.NULL)
        self.ts_log.close()

    def on_sync_message(self, bus, msg):
        if msg.get_structure().get_name() == 'prepare-window-handle':
            msg.src.set_property('force-aspect-ratio', True)

    def on_error(self, bus, msg):
        print('on_error():', msg.parse_error())

Upvotes: 5

Views: 14564

Answers (4)

Arkadiy Bolotov

Reputation: 357

In case someone is trying to fix PTS timestamps on an incoming stream from an RTSP camera (30 FPS), here is code that alters the PTS on the fly. It took me two days to figure this out; I hope it saves some time for others.

UPDATE Aug 2023: There is now a more elegant solution (https://stackoverflow.com/a/76861526/1708124)

#!/usr/bin/python3

import sys
import logging
import numpy as np

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstApp", "1.0")
from gi.repository import Gst, GstApp, GObject, GLib

Gst.init(None)

last_pts = 0

class Main:
    def __init__(self):

        self.signed = None
        self.depth = None
        self.rate = None
        self.channels = None

        self.pipeline = Gst.parse_launch("rtspsrc latency=2000 location=rtspt://guest:[email protected]:5540/Streaming/Channels/101 ! rtph264depay ! h264parse ! avdec_h264 ! identity name=adjust ! queue2 ! watchdog timeout=10000 ! autovideosink")

        src_element = self.pipeline.get_by_name('adjust')
        # Get the static source pad of the identity element
        srcpad = src_element.get_static_pad('src')
        # Add a buffer probe to that pad; adjust_pts runs for every buffer
        probeID = srcpad.add_probe(Gst.PadProbeType.BUFFER, self.adjust_pts)

        # Watch the bus so that on_eos and on_error are actually called
        bus = self.pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect('message::eos', self.on_eos)
        bus.connect('message::error', self.on_error)

        # The MainLoop
        self.mainloop = GLib.MainLoop()

        # And off we go!
        self.pipeline.set_state(Gst.State.PLAYING)
        self.mainloop.run()


    def adjust_pts(self, pad, info):
        global last_pts

        # 33333333 ns is one frame period (1/30 s) at 30 FPS
        new_pts = last_pts + 33333333

        # Overwrite the buffer PTS with the synthesized, monotonic value
        info.get_buffer().pts = new_pts

        last_pts = new_pts

        return Gst.PadProbeReturn.OK

    def on_eos(self, bus, msg):
        logging.debug('on_eos')
        self.mainloop.quit()

    def on_error(self, bus, msg):
        error = msg.parse_error()
        print('on_error:', error[1])
        self.mainloop.quit()



class Error(Exception):

    def __init__(self, message, detail=None):
        self.message = message
        self.detail = detail

        logging.debug('audio: Error %s %s', self.message, self.detail)

    def __str__(self):
        return '%s - %s' % (self.message, self.detail)

Main()

Upvotes: 2

Arkadiy Bolotov

Reputation: 357

I spent 2 years approaching this problem from different directions. I ended up solving it by:

  1. Measuring the actual FPS of an RTSP stream (I posted the code at https://github.com/ethaniel/rtsp-fps; run the script for a minute or so, until the FPS value stabilizes).
  2. Using videorate to set the framerate based on the value provided by the script above:
gst-launch-1.0 rtspsrc location=rtspt://admin:[email protected]/Streaming/Channels/101 ! \
rtph264depay ! h264parse ! avdec_h264 ! \
videorate ! video/x-raw,framerate=30039/1000 ! \
watchdog timeout=10000 ! autovideosink

What this does: even if the PTS is not monotonically increasing between some frames, GStreamer can rewrite the timestamps based on the average frame rate you measured.
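For reference, here is the same pipeline driven from Python rather than gst-launch (a sketch: the location URL is a placeholder, and 30039/1000 stands in for whatever rate the measuring script reports):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Same elements as the gst-launch command above; videorate retimes the
# stream to the measured average rate, smoothing over PTS irregularities.
pipeline = Gst.parse_launch(
    'rtspsrc location=rtspt://<user>:<password>@<camera-ip>/Streaming/Channels/101 ! '
    'rtph264depay ! h264parse ! avdec_h264 ! '
    'videorate ! video/x-raw,framerate=30039/1000 ! '
    'watchdog timeout=10000 ! autovideosink')

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()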

Upvotes: 2

penguin

Reputation: 846

I know this is a bit late, kalleknast, but did you ever manage to get the timestamps? I used do-timestamp=true on rtspsrc but it didn't work.

Upvotes: 0

kalleknast

Reputation: 311

As suggested by mpr, the non-monotonic timestamps occur when streaming h.264, and they seem to occur just at the beginning of the stream. The problem disappears when streaming raw video.

[Figure: black line and red dots show the time difference between consecutive timestamps in seconds; the grey horizontal line marks 1/15 s, the expected difference.]

[Figure: same as above, but with timestamps from a raw stream instead.]

The big 300 ms jumps every 30 frames or so seem to be specific to the Logitech C920 camera. When I use an onboard laptop camera, the jumps are smaller, about 130 ms, and rarer, every 85 frames or so.
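For anyone who wants to reproduce the raw-video comparison, the switch is a change to the source caps in the question's code (a sketch; it assumes the camera offers a raw mode at the chosen size and rate, in which case h264parse and avdec_h264 drop out and the file branch needs its own encoder):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Raw-video source caps replacing the video/x-h264 caps in the question;
# 15 fps matches the expected 1/15 s frame spacing in the figures above.
vid0caps = Gst.Caps.from_string(
    'video/x-raw,width=%d,height=%d,framerate=%d/1' % (1280, 720, 15))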

Upvotes: 1
