caiomcg

Reputation: 523

FFmpeg video from file to network

I am currently working with FFmpeg in a video streaming solution over RTP. What bugs me is how to stream at the correct framerate. Currently I use a sleep in the main streaming loop:

while (av_read_frame(input_format_ctx, &packet) >= 0) { // Read video packets until EOF is reached or an error occurs
        if (packet.stream_index == 0) {
            if (av_interleaved_write_frame(output_format_ctx, &packet) < 0) { // Write the packet to the output stream
                av_packet_unref(&packet); // Release the packet and finish if a problem occurred
                break;
            }
            av_packet_unref(&packet); // Release the packet
        }

        current_time = getSystemTime(); // Get system time
        adaptativeSleep((last_time + 1 / output_framerate) - current_time); // Sleep according to framerate
        last_time = getSystemTime(); // Get finish time
    }

Is there a correct way of letting FFmpeg handle the framerate?

Thanks in advance

P.S.: I only remux the stream; as a result, FFmpeg runs through the file in a few seconds without my "sleep".

Upvotes: 1

Views: 728

Answers (1)

WLGfx

Reputation: 1179

The AVCodecContext.pkt_timebase and AVCodecContext.time_base are what you need to keep the stream running smoothly. Each stream has its own pkt_timebase. When broadcasting, use one stream to sync to the system clock. Use the AVPacket.pts together with FFmpeg's av_q2d() function to convert the pts value to seconds as a double.

double time_base = av_q2d(ctx->pkt_timebase);
int64_t time_stamp = packet->pts;
double packet_time = time_base * time_stamp; // time stamp in seconds

From there, sync the first stream's packet to the system clock. The pauses between sending the following packets will then be easier to handle.

The following function has nothing to do with streaming, but it does illustrate syncing with a system clock.

FFVideoFrame *FFMpeg::getVideoFrame(double r_time) { // r_time is current system time
    FFVideoFrame *vframe = nullptr;

    double timestamp;
    double clock_current_time = r_time - sys_clock_start;
    double clock_current_frame = 0.0;

    mutex_vid_queues.lock();

    int vid_queue_size = vid_queue.size();

    if (playback_started) {
        if (vid_queue_size > MAX_VID_OVERRUN) { // keep the buffer queues low
            playback_clock_start += 0.75;
            LOGD("TIMESHIFT buffers over MAX_VID_OVERRUN");
        } else if (vid_queue_size < 20) {
            playback_clock_start -= 0.4; // resync
            LOGD("TIMESHIFT buffers below 20");
        }
    }

    if (!playback_started) {
        // queued video buffers need to be at least 1 second for network stream

        if (((vid_queue_size > MIN_VID_FRAMES_START_NON_NETWORK && !isNetworkStream)
                || (vid_queue_size > MIN_VID_FRAMES_START && isNetworkStream))
                ) {

            playback_started = true;

            vframe = vid_queue.front();
            vid_queue.pop_front();

            timestamp = vframe->timestamp_f;
            playback_clock_start = timestamp; // set stream start time from time stamp

            sys_clock_start = r_time; // set system start time from current time
        }
    } else {
        bool in_bounds = true;
        FFVideoFrame *frame_temp;
        int drop_count = 0;

        while (in_bounds) {
            if (vid_queue_size == 0) {
                in_bounds = false;
            } else {
                frame_temp = vid_queue.front();

                timestamp = frame_temp->timestamp_f;
                clock_current_frame = timestamp - playback_clock_start;

                if (clock_current_frame > clock_current_time) {
                    in_bounds = false;
                } else {

                    // adds 0.xx of a second tolerance for video playback REMOVED FROM HERE
                    // this may get increased at some point

                    vid_queue.pop_front();
                    vid_queue_size--;

                    if (isNetworkStream
                        && fabs(clock_current_time - clock_current_frame) < 0.05) {
                        in_bounds = false;
                    }

                    if (vframe) {
                        vid_unused.push_back(vframe);
                        drop_count++;
                    }

                    vframe = frame_temp;
                }
            }
        }

        // if rendering takes too long then dropped frames will occur

        if (drop_count > 0) LOGD("Dumped %d video frames", drop_count);
    }

    mutex_vid_queues.unlock();

    if (vframe) clock_last_frame = (int64_t)timestamp;

    return vframe;
}

Upvotes: 3
