Reputation: 409
As part of a bigger project, I'm trying to decode a number of HD (1920x1080) video streams simultaneously. Each video stream is stored in raw yuv420p format within an AVI container. I have a Decoder class from which I create a number of objects within different threads (one object per thread). The two main methods in Decoder are decode() and getNextFrame(), which I provide the implementations for below.
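For context, each thread simply constructs its own Decoder and calls decode() on it, roughly like this (a simplified sketch, not my actual driver code; it assumes Decoder is default-constructible and omits the rest of the class):

#include <thread>
#include <vector>

// Hypothetical driver: one Decoder object per thread, each running its own decoding loop.
void decodeAllStreams(int streamCount) {
    std::vector<std::thread> workers;

    for (int i = 0; i < streamCount; ++i) {
        workers.emplace_back([] {
            Decoder decoder;   // one object per thread
            decoder.decode();  // runs the decoding loop shown below
        });
    }
    for (std::thread &w : workers) {
        w.join();
    }
}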
When I separate out the decoding logic and use it to decode a single stream, everything works fine. However, when I use the multi-threaded code, I get a segmentation fault and the program crashes within the processing code in the decoding loop. After some investigation, I realized that the data array of the AVFrame filled in getNextFrame() contains addresses which are out of range (according to gdb).
I'm really lost here! I'm not doing anything that would change the contents of the AVFrame in my code. The only place where I access the AVFrame is when I call sws_scale() to convert the color format, and that is where the segmentation fault occurs in the multi-threaded case because of the corrupt AVFrame. Any suggestion as to why this is happening is greatly appreciated. Thanks in advance.
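The processing part of the loop looks roughly like this (a simplified sketch rather than my exact code; the scaler setup, the destination buffer, and the RGB24 target format are placeholders):

// Hypothetical sketch of the conversion step inside the decoding loop.
// Assumes a 1920x1080 yuv420p source converted to RGB24.
SwsContext *sws_ctx = sws_getContext(1920, 1080, PIX_FMT_YUV420P,
                                     1920, 1080, PIX_FMT_RGB24,
                                     SWS_BILINEAR, NULL, NULL, NULL);

AVFrame *rgb_frame = avcodec_alloc_frame();
uint8_t *rgb_buffer = (uint8_t *) av_malloc(avpicture_get_size(PIX_FMT_RGB24, 1920, 1080));
avpicture_fill((AVPicture *) rgb_frame, rgb_buffer, PIX_FMT_RGB24, 1920, 1080);

// This is where the crash happens: frame->data[] holds out-of-range addresses.
sws_scale(sws_ctx, frame->data, frame->linesize, 0, 1080,
          rgb_frame->data, rgb_frame->linesize);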
The decode() method:
void decode() {
    QString filename("video.avi");

    AVFormatContext* container = 0;
    if (avformat_open_input(&container, filename.toStdString().c_str(), NULL, NULL) < 0) {
        fprintf(stderr, "Could not open %s\n", filename.toStdString().c_str());
        exit(1);
    }

    if (avformat_find_stream_info(container, NULL) < 0) {
        fprintf(stderr, "Could not find file info..\n");
        return;
    }

    // find a video stream
    int stream_id = -1;
    for (unsigned int i = 0; i < container->nb_streams; i++) {
        if (container->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            stream_id = i;
            break;
        }
    }
    if (stream_id == -1) {
        fprintf(stderr, "Could not find a video stream..\n");
        return;
    }

    av_dump_format(container, stream_id, filename.toStdString().c_str(), false);

    // find the appropriate codec and open it
    AVCodecContext* codec_context = container->streams[stream_id]->codec; // Get a pointer to the codec context for the video stream
    AVCodec* codec = avcodec_find_decoder(codec_context->codec_id);       // Find the decoder for the video stream
    if (codec == NULL) {
        fprintf(stderr, "Could not find a suitable codec..\n");
        return; // Codec not found
    }

    // Inform the codec that we can handle truncated bitstreams -- i.e.,
    // bitstreams where frame boundaries can fall in the middle of packets
    if (codec->capabilities & CODEC_CAP_TRUNCATED)
        codec_context->flags |= CODEC_FLAG_TRUNCATED;

    fprintf(stderr, "Codec: %s\n", codec->name);

    // open the codec
    int ret = avcodec_open2(codec_context, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open the needed codec.. Error: %d\n", ret);
        return;
    }

    // allocate video frame
    AVFrame *frame = avcodec_alloc_frame(); // deprecated, should use av_frame_alloc() instead
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame..\n");
        return;
    }

    int frameNumber = 0;

    // as long as there are remaining frames in the stream
    while (getNextFrame(container, codec_context, stream_id, frame)) {
        // Processing logic here...
        // AVFrame data array contains three addresses which are out of range
    }

    // freeing resources
    av_free(frame);
    avcodec_close(codec_context);
    avformat_close_input(&container);
}
The getNextFrame() method:
bool getNextFrame(AVFormatContext *pFormatCtx,
                  AVCodecContext *pCodecCtx,
                  int videoStream,
                  AVFrame *pFrame) {
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    char buf[1024];
    int len;
    int got_picture;

    AVPacket avpkt;
    av_init_packet(&avpkt);
    memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);

    // read data from the bit stream and store it in the AVPacket object
    while (av_read_frame(pFormatCtx, &avpkt) >= 0) {
        // check the stream index of the read packet to make sure it is a video stream
        if (avpkt.stream_index == videoStream) {
            // decode the packet, store the decoded content in the AVFrame object,
            // and set the flag if we have a complete decoded picture
            avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, &avpkt);

            // if we have completed decoding an entire picture (frame), return true
            if (got_picture) {
                av_free_packet(&avpkt);
                return true;
            }
        }
        // free the AVPacket object that was allocated by av_read_frame
        av_free_packet(&avpkt);
    }
    return false;
}
The lock management callback function:
static int lock_call_back(void **mutex, enum AVLockOp op) {
    switch (op) {
        case AV_LOCK_CREATE:
            *mutex = (pthread_mutex_t *) malloc(sizeof(pthread_mutex_t));
            pthread_mutex_init((pthread_mutex_t *)(*mutex), NULL);
            break;
        case AV_LOCK_OBTAIN:
            pthread_mutex_lock((pthread_mutex_t *)(*mutex));
            break;
        case AV_LOCK_RELEASE:
            pthread_mutex_unlock((pthread_mutex_t *)(*mutex));
            break;
        case AV_LOCK_DESTROY:
            pthread_mutex_destroy((pthread_mutex_t *)(*mutex));
            free(*mutex);
            break;
    }
    return 0;
}
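The callback is registered with av_lockmgr_register() before the decoding threads are started; a minimal sketch of that initialization (the actual start-up code is not shown here):

// Register the lock manager once, before any avformat/avcodec calls from the worker threads.
av_register_all();
if (av_lockmgr_register(&lock_call_back) < 0) {
    fprintf(stderr, "Could not register the lock manager callback..\n");
}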
Upvotes: 2
Views: 1756
Reputation: 409
I figured out the cause of the problem. It is the av_free_packet() call before returning when a decoded frame is obtained. I commented out that call and the program worked! I'm still not quite sure why this affects the filled AVFrame, though.
I'm also not sure if removing that call would cause a memory leak in my code. Hopefully a libavcodec expert can shed some light on this and explain what I was doing wrong.
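If the problem really is that the decoded frame still references the packet's data (which would explain why freeing the packet corrupts it), one way to keep the av_free_packet() call might be to copy the decoded picture into a buffer I own before freeing the packet. An untested sketch using the old avpicture helpers, with the format and size taken from the codec context:

// Untested sketch: inside getNextFrame(), copy the decoded picture into a
// buffer we own before releasing the packet, so the data handed to the caller
// no longer points into the packet that av_free_packet() is about to free.
if (got_picture) {
    AVPicture owned;
    avpicture_alloc(&owned, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
    av_picture_copy(&owned, (AVPicture *) pFrame, pCodecCtx->pix_fmt,
                    pCodecCtx->width, pCodecCtx->height);

    // ... hand 'owned' (its data/linesize arrays) to the caller instead of
    // pFrame's original pointers, and release it later with avpicture_free().

    av_free_packet(&avpkt);
    return true;
}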
Upvotes: 1