Reputation: 81
We are encoding sample frames with NvEncoder using HEVC. Since the raw HEVC frames do not carry any timestamps, we wrote a remuxer in C++ that generates timestamps for the frames in order and writes the encoded frames into a video container (MP4, MOV), so that we can perform seek operations on the video. The output MP4 looks fine when played with ffplay, and the timestamps look correct when checked with ffprobe. However, when we play the video in a GStreamer pipeline, the 2nd and 3rd frames appear to have exactly the same timestamp, so the 3rd frame is skipped and the 2nd frame is shown twice. We cannot tolerate any frame loss, so we need to solve this problem, which we suspect is caused by an incompatibility between FFmpeg and GStreamer regarding frame timestamps. I can also provide the source code of our remuxer and example outputs if that would help.
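For context, the setup around the remuxing loop (shown in the edit below) follows the usual libavformat remuxing boilerplate. This is a simplified sketch, not our exact code (the function name open_contexts is illustrative and error handling is shortened):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

// Open the encoded input, create the MP4/MOV output, copy the video codec
// parameters and write the container header.
static int open_contexts(const char *in_name, const char *out_name,
                         AVFormatContext **ifmt_ctx, AVFormatContext **ofmt_ctx) {
    if (avformat_open_input(ifmt_ctx, in_name, nullptr, nullptr) < 0) return -1;
    if (avformat_find_stream_info(*ifmt_ctx, nullptr) < 0) return -1;

    avformat_alloc_output_context2(ofmt_ctx, nullptr, nullptr, out_name);
    if (!*ofmt_ctx) return -1;

    for (unsigned i = 0; i < (*ifmt_ctx)->nb_streams; i++) {
        AVStream *in_st = (*ifmt_ctx)->streams[i];
        if (in_st->codecpar->codec_type != AVMEDIA_TYPE_VIDEO) continue;
        AVStream *out_st = avformat_new_stream(*ofmt_ctx, nullptr);
        if (!out_st || avcodec_parameters_copy(out_st->codecpar, in_st->codecpar) < 0)
            return -1;
        out_st->codecpar->codec_tag = 0;  // let the muxer choose the tag
    }

    if (!((*ofmt_ctx)->oformat->flags & AVFMT_NOFILE) &&
        avio_open(&(*ofmt_ctx)->pb, out_name, AVIO_FLAG_WRITE) < 0)
        return -1;

    return avformat_write_header(*ofmt_ctx, nullptr);
}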
I used the following GStreamer pipeline to play the MP4:
gst-launch-1.0 filesrc location=5_fps.mp4 ! qtdemux name=demux demux.video_0 ! queue ! decodebin ! videoconvert ! videoscale ! autovideosink
The following command also produces the same mismatching frame timestamps:
ffmpeg -i 5_fps.bin -vcodec copy -acodec copy 5_fps.mp4
Many thanks!
Edit: I am adding the part of the remuxer where each frame is read from the input stream and timestamps are assigned.
int frame_no = -1; // starting with -1 gives the same ffprobe results as a command-line ffmpeg container conversion; starting with 0 again causes the same timestamp problem
while (1) {
    AVStream *in_stream, *out_stream;

    _status = av_read_frame(_ifmt_ctx, &_pkt);
    if (_status < 0) break;

    in_stream = _ifmt_ctx->streams[_pkt.stream_index];
    if (_pkt.stream_index >= _stream_mapping_size ||
        _stream_mapping[_pkt.stream_index] < 0) {
        av_packet_unref(&_pkt);
        continue;
    }

    double inputFPS = av_q2d(in_stream->r_frame_rate);
    double outputFPS = av_q2d(in_stream->r_frame_rate); // currently unused

    _pkt.stream_index = _stream_mapping[_pkt.stream_index];
    out_stream = _ofmt_ctx->streams[_pkt.stream_index];

    // generate monotonically increasing timestamps from the frame counter
    _pkt.pts = frame_no * in_stream->time_base.den / inputFPS;
    _pkt.dts = _pkt.pts;
    _pkt.duration = in_stream->time_base.den / inputFPS;
    _pkt.pos = -1;

    std::cout << "rescaled pts: " << _pkt.pts << " dts: " << _pkt.dts
              << " frame no: " << frame_no << std::endl;
    std::cout << "input time_base den: " << in_stream->time_base.den
              << " output time_base den: " << out_stream->time_base.den << std::endl;

    frame_no++;

    _status = av_interleaved_write_frame(_ofmt_ctx, &_pkt);
    if (_status < 0) {
        std::cout << "Error muxing packet\n";
        break;
    }
    av_packet_unref(&_pkt);
}
I first tried this method, where each frame's timestamp (pts and dts) is incremented by the packet duration. I initially assumed it would not work for videos with B-frames, since with B-frames the decode order differs from the presentation order, so I first tried videos without B-frames. However, when I tried videos with B-frames, it worked as well; the frames did not come out in a different order as I had expected. The only issue is that the second and third frames appear to have the same timestamps in GStreamer (not in FFmpeg); apart from those two frames, the rest of the video plays just fine. Overall, I am also confused that B-frames do not cause any frame-order problem.
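To make the approach explicit, the same idea expressed against the output stream's time base would look roughly like this (a sketch of the intent, not the exact code above; frame_no, in_stream and out_stream as in the loop):

// Constant frame duration derived from r_frame_rate, converted to the
// output stream's time base, with dts forced equal to pts.
AVRational frame_dur = av_inv_q(in_stream->r_frame_rate);          // seconds per frame
int64_t dur = av_rescale_q(1, frame_dur, out_stream->time_base);   // ticks per frame
_pkt.duration = dur;
_pkt.dts = _pkt.pts = frame_no * dur;

In other words, every packet gets dts == pts and a constant duration.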
Example encoded input and example output video, if you want to examine the frames. (I don't know if it's okay to share files over Google Drive; please correct me if there is a better way to share.)
Upvotes: 2
Views: 939
Reputation: 81
As @AlanBirtles mentioned, assigning timestamps to B-frames in my naive way is not correct at all. I had assumed that, since the video was playable, the timestamps were somehow being corrected by FFmpeg or GStreamer, and I did not connect my main problem to this. However, when I converted the container of a video with no B-frames, the problem of the 3rd frame being lost disappeared. So I either have to set the timestamps of B-frames correctly, or use videos without B-frames. Even though it is not a viable long-term solution, for the time being we will not use B-frames; in the future I will try to reimplement the remuxer so that any video is remuxed correctly.
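For reference, when the input already carries valid timestamps (for example, when it comes from a container rather than a raw HEVC elementary stream), the usual remuxing loop simply rescales them with av_packet_rescale_ts instead of recomputing pts/dts by hand, which preserves the pts/dts difference of B-frames. A rough sketch of that loop, reusing the names from the question (not a drop-in replacement for the raw-HEVC case, where valid input timestamps are not available):

while (av_read_frame(_ifmt_ctx, &_pkt) >= 0) {
    AVStream *in_stream = _ifmt_ctx->streams[_pkt.stream_index];
    if (_pkt.stream_index >= _stream_mapping_size ||
        _stream_mapping[_pkt.stream_index] < 0) {
        av_packet_unref(&_pkt);
        continue;
    }

    _pkt.stream_index = _stream_mapping[_pkt.stream_index];
    AVStream *out_stream = _ofmt_ctx->streams[_pkt.stream_index];

    // Convert pts, dts and duration from the input to the output time base;
    // this keeps pts != dts for B-frames instead of forcing them equal.
    av_packet_rescale_ts(&_pkt, in_stream->time_base, out_stream->time_base);
    _pkt.pos = -1;

    if (av_interleaved_write_frame(_ofmt_ctx, &_pkt) < 0) {
        std::cout << "Error muxing packet\n";
        break;
    }
    av_packet_unref(&_pkt);
}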
Upvotes: 2