Tveitan

Reputation: 328

Capturing H264 with logitech C920 to OpenCV

I’ve been trying to capture an H264 stream from my two Logitech C920 cameras with OpenCV (on a Raspberry Pi 2). I have come to the conclusion that this is not possible, because it is not yet implemented. I’ve looked a little at OpenCV/modules/highgui/cap_libv4l.cpp and found that the "VideoCapture" function always converts the pixel format to BGR24. I tried changing this to h264, but only got a black screen. I guess this is because the stream is not being decoded the right way.

So I made a workaround using:

(You can find the loopback and rtspserver on GitHub.) First I set up a virtual device using v4l2loopback. Then the rtspserver captures in h264 and streams RTSP to my localhost (127.0.0.1). Then I catch it again with GStreamer and pipe it into the virtual v4l2 video device created by v4l2loopback, using the "v4l2sink" element in gst-launch-0.10. This solution works: I can connect to the virtual device with the OpenCV VideoCapture and get a full HD picture without overloading the CPU. But it is nowhere near a good enough solution: I get roughly 3 seconds of delay, which is too high for my stereo vision application, and it uses a ton of bandwidth.

So I was wondering if anybody knew a way that I could use the v4l2 capture program from Derek Molloy's boneCV/capture program (which I know works) to capture in h264, then pipe it to gst-launch-0.10, and from there pipe it into the v4l2sink for my virtual device? (You can find the capture program here: https://github.com/derekmolloy/boneCV)

The GStreamer command I use is:

    gst-launch-0.10 rtspsrc location=rtsp://admin:[email protected]:8554/unicast ! decodebin ! v4l2sink device=/dev/video4

Or maybe you in fact know what I would need to change in the OpenCV highgui code to be able to capture h264 directly from my device, without having to use the virtual device? That would be amazingly awesome!

Here are the links to the loopback and the rtspserver that I use:

Sorry about the weird links, I don't have enough reputation yet to post more links.

Upvotes: 4

Views: 2708

Answers (2)

effgee

Reputation: 1

Check out Derek Molloy on YouTube. He's using a BeagleBone, but it presumably ticks this box: https://www.youtube.com/watch?v=8QouvYMfmQo

Upvotes: 0

ARibeiro

Reputation: 114

I don't know exactly what you would need to change in OpenCV, but I very recently started coding with video on the Raspberry Pi.

I'll share my findings with you.

Here is what I have working so far:

  • I can read the C920 h264 stream directly from the camera using the V4L2 API at 30 FPS (if you try to read YUYV buffers instead, the driver limits you to 10, 5 or 2 FPS over USB)
  • I can decode the stream to YUV 4:2:0 buffers on the Raspberry Pi's Broadcom chip using the OpenMAX IL API

My work-in-progress code is at: GitHub.

Sorry about the code organization, but I think the abstraction I made is more readable than plain V4L2 or OpenMAX code.

Some code examples:

Reading the camera's h264 stream using the V4L2 wrapper:

    device.streamON();                      // start streaming (VIDIOC_STREAMON)
    v4l2_buffer bufferQueue;
    while (!exit_requested){
        //capture code
        device.dequeueBuffer(&bufferQueue); // blocks until the driver fills a buffer
        // use the h264 buffer inside bufferPtr[bufferQueue.index]
        ...
        device.queueBuffer(bufferQueue.index, &bufferQueue); // return the buffer to the driver
    }
    device.streamOFF();                     // stop streaming (VIDIOC_STREAMOFF)

Decoding h264 using OpenMAX IL:

    BroadcomVideoDecode decoder;
    while (!exit_requested) {
        //capture code start
        ...
        //decoding code: feed the captured h264 buffer to the hardware decoder
        decoder.writeH264Buffer(bufferPtr[bufferQueue.index], bufferQueue.bytesused);
        //capture code end
        ...
    }

Upvotes: 2
