Reputation: 503
Currently, I am working on a new feature for my software using the Libav API. I was able to merge a video file with an audio file; the output is an MP4 file and the source code works perfectly.
Right now, the start times of the video and audio streams are the same: both start at 00:00. My next challenge is to add an option to shift the start time of the audio stream based on a variable. For example, if the audio start variable is equal to 10 seconds, the audio stream should start playing at 00:10.
So I wonder, what is the best approach to implement this feature? Should I insert silent packets until the audio start time is reached? Or should I modify the timestamp information of every audio packet before it is written into the output container?
Here is the piece of code I use to write the audio stream into the MP4 file. Right now it works like a charm, but I guess this is the place where I should implement the new requirement. I would appreciate any suggestions.
while (1) {
    AVStream *in_stream;
    int ret = av_read_frame(audioInputFormatContext, pkt);
    if (ret < 0)
        break;

    in_stream = audioInputFormatContext->streams[pkt->stream_index];
    pkt->stream_index = outIndex;
    AVRational out_time_base = audioStreamList.at(i)->time_base;

    // Rescale the packet timestamps from the input to the output time base.
    pkt->pts = av_rescale_q_rnd(pkt->pts, in_stream->time_base, out_time_base,
                                static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    pkt->dts = av_rescale_q_rnd(pkt->dts, in_stream->time_base, out_time_base,
                                static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    pkt->duration = av_rescale_q(pkt->duration, in_stream->time_base, out_time_base);
    pkt->pos = -1;

    ret = av_interleaved_write_frame(formatContext, pkt);
    if (ret < 0) {
        fprintf(stderr, "Error muxing packet\n");
        break;
    }
    av_packet_unref(pkt);
}
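For reference, the timestamp-shifting option I mentioned would look roughly like the sketch below. This is not part of my working code: audioDelaySeconds is just a placeholder for the delay variable, and the shift would be applied after the existing av_rescale_q_rnd() calls.

    int audioDelaySeconds = 10; // placeholder for the configurable audio start offset

    // Convert the delay from seconds into the output stream's time base.
    int64_t delay = av_rescale_q(audioDelaySeconds, AVRational{1, 1},
                                 audioStreamList.at(i)->time_base);

    // Shift both timestamps forward so the audio starts later in the container.
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts += delay;
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts += delay;

I am not sure every player honours a leading gap in an MP4 audio track, which is part of why I am asking which approach is better.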
Upvotes: 0
Views: 291
Reputation: 503
The best way to shift the start time of an audio stream with the Libav API is to use the "adelay" filter. Here is a small piece of code showing how to create it:
AVFilterContext *adelay_ctx;
const AVFilter *adelay;
char args[512]; // This variable contains the filter parameters
int error;

adelay = avfilter_get_by_name("adelay");
if (!adelay) {
    av_log(NULL, AV_LOG_ERROR, "Could not find the adelay filter.\n");
    return AVERROR_FILTER_NOT_FOUND;
}

int delay_time = 8000; // delay in milliseconds, applied to all channels
snprintf(args, sizeof(args), "delays=%d:all=1", delay_time);

error = avfilter_graph_create_filter(&adelay_ctx, adelay, "adelay", args,
                                     NULL, filter_graph);
if (error < 0) {
    av_log(NULL, AV_LOG_ERROR, "Cannot create audio adelay filter\n");
    return error;
}
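The snippet above only creates the adelay filter instance; it still has to be linked between an audio buffer source and sink, and the graph has to be configured before frames can be pushed through it. A rough sketch of that wiring, assuming abuffersrc_ctx and abuffersink_ctx were created earlier from the "abuffer" and "abuffersink" filters (those variable names are illustrative, not taken from the code above):

    // Sketch only: decoded audio frames flow source -> adelay -> sink.
    if ((error = avfilter_link(abuffersrc_ctx, 0, adelay_ctx, 0)) < 0 ||
        (error = avfilter_link(adelay_ctx, 0, abuffersink_ctx, 0)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot link the adelay filter\n");
        return error;
    }
    if ((error = avfilter_graph_config(filter_graph, NULL)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot configure the filter graph\n");
        return error;
    }

    // Decoded audio frames are then fed in with av_buffersrc_add_frame() and read
    // back with av_buffersink_get_frame(), and the delayed frames are re-encoded
    // before muxing, because adelay operates on raw samples rather than on packets.

Keep in mind this means the audio has to be decoded and re-encoded; simply remuxing the packets, as in the loop from the question, is no longer enough.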
Upvotes: 0