Jesus

Reputation: 1

OpenCV captures grayscale image with video capture card API

I am developing a Qt application where I have to capture video images from different video capture cards (different versions) for a project at work.

I've successfully captured from a few cards using OpenCV and the DirectShow drivers (which I treat as the standard method): I read frames into a cv::Mat, convert them to a QImage, and emit a signal with the ready QImage. The MainWindow receives this signal and paints the captured image into a QLabel (like many examples I've seen here :P).
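
Roughly, the DirectShow path that already works for me looks like this (a simplified sketch; the class and signal names are from my own code and the error handling is stripped):

// Simplified sketch of the working DirectShow-based path (Qt + OpenCV).
#include <opencv2/opencv.hpp>
#include <QImage>
#include <QObject>

class CaptureWorker : public QObject
{
    Q_OBJECT
public:
    explicit CaptureWorker( int device ) { m_cap.open( device ); }  // opens the capture card (DirectShow backend on Windows)

signals:
    void imageReady( const QImage &img );

public slots:
    void grabFrame()
    {
        cv::Mat frame;
        if ( !m_cap.read( frame ) )            // frames arrive as 8-bit BGR
            return;

        cv::cvtColor( frame, frame, cv::COLOR_BGR2RGB );   // QImage::Format_RGB888 expects RGB byte order

        QImage img( frame.data, frame.cols, frame.rows, static_cast<int>( frame.step ), QImage::Format_RGB888 );
        emit imageReady( img.copy() );         // deep copy, since "frame" dies at the end of this slot
    }

private:
    cv::VideoCapture m_cap;
};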

But now I need to capture images from a card with a custom manufacturer API, without DirectShow.

In summary: with this API you assign a window handle (HWND) associated with a component (a widget, for example) and register a callback that is invoked whenever the driver receives a captured frame; the callback renders the image and paints it into the associated handle. The callback invoked for rendering and painting is:

int CALLBACK OnVideoRawStreamCallback( BYTE* buffer, ULONG bufLen, unsigned __int64 timeStamp, void* context, ULONG channel, ULONG boardID, ULONG productID );

Inside the callback it calls ret = AvVideoRender( handle, buffer, bufLen );, which renders the image and paints it into the handle.

Well, I'm trying to replace that AvVideoRender call with an OpenCV conversion. I think wrapping the received BYTE* in a cv::Mat and then converting that cv::Mat to a QImage should work, right?

The problem is that I can't get a color image... only grayscale. I mean, if I do this:

int CALLBACK OnVideoRawStreamCallback( BYTE* buffer, ULONG bufLen, unsigned __int64 timeStamp, void* context, ULONG channel, ULONG boardID, ULONG productID )
{
    // Original syntax
    //ret = AvVideoRender( m_videoRender[0], buffer, bufLen );

    // => New
    // Wrap the raw buffer in an OpenCV matrix (single 8-bit channel)
    cv::Mat mMatFrame( IMAGE_HEIGHT, IMAGE_WIDTH, CV_8U, buffer );

    // Convert the cv::Mat to a QImage (grayscale, one byte per pixel)
    QImage qVideoCam( (uchar*)mMatFrame.data, mMatFrame.cols, mMatFrame.rows, mMatFrame.step, QImage::Format_Indexed8 );
    // Emit a SIGNAL with the new QImage
    emit imageReady( qVideoCam );

    return 0;
}

It works correctly and I can see the captured video... but in grayscale.

I think I have to build the cv::Mat with CV_8UC3 instead of CV_8U... but I get an unhandled exception when the application tries to convert the cv::Mat to a QImage. Here's my sample code trying to convert it to a color image:

int CALLBACK OnVideoRawStreamCallback( BYTE* buffer, ULONG bufLen, unsigned __int64 timeStamp, void* context, ULONG channel, ULONG boardID, ULONG productID )
{
    // Original syntax
    //ret = AvVideoRender( m_videoRender[0], buffer, bufLen );

    // => New
    // Wrap the raw buffer in an OpenCV matrix, now as 3 channels (assumes packed 24-bit data)
    cv::Mat mMatFrame( IMAGE_HEIGHT, IMAGE_WIDTH, CV_8UC3, buffer );

    // Convert the cv::Mat to a QImage (3 bytes per pixel)
    QImage qVideoCam( (uchar*)mMatFrame.data, mMatFrame.cols, mMatFrame.rows, mMatFrame.step, QImage::Format_RGB888 );
    // Emit a SIGNAL with the new QImage
    emit imageReady( qVideoCam );

    return 0;
}
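
To narrow this down, I suppose it's worth comparing bufLen with what a CV_8UC3 matrix actually needs: if the card delivers a planar YUV format, the buffer holds fewer than width * height * 3 bytes, so reading it as packed 24-bit data runs past the end of the buffer, which would explain the exception. A quick check inside the callback (needs #include <QDebug>):

    // Sanity check: does the driver really deliver 3 bytes per pixel?
    const ULONG packed24Size  = IMAGE_WIDTH * IMAGE_HEIGHT * 3;      // what CV_8UC3 + Format_RGB888 read
    const ULONG planar420Size = IMAGE_WIDTH * IMAGE_HEIGHT * 3 / 2;  // typical size of a 4:2:0 planar frame (e.g. YV12)

    if ( bufLen < packed24Size )
    {
        // Wrapping the buffer in a CV_8UC3 Mat would read out of bounds here.
        qDebug() << "bufLen:" << bufLen
                 << "needed for CV_8UC3:" << packed24Size
                 << "typical 4:2:0 planar size:" << planar420Size;
    }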

The video specs are the following: the card delivers the raw frames in YV12 format (planar YUV 4:2:0).

So, I would like to know whether, with these parameters, I can convert the BYTE* into a color image. I think it's possible... I'm sure I'm doing something wrong, but I don't know what :S

I've tested with the original AvVideoRender and I can see color video in the QLabel, so I know I'm receiving color images. But that solution causes some problems in my project (for example, it isn't a general solution), and I have no control over what gets painted into the handle (I can't grab the pixmap and scale it keeping the aspect ratio, for example).
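
With the QImage/signal route that kind of control is easy to get back; for illustration, the slot in my MainWindow (the names onImageReady and videoLabel are just my own) can scale the frame like this:

    // Slot connected to imageReady(const QImage&); scales the frame to the QLabel
    // while keeping the aspect ratio, which is exactly what the handle-based
    // AvVideoRender path doesn't let me do.
    void MainWindow::onImageReady( const QImage &img )
    {
        ui->videoLabel->setPixmap( QPixmap::fromImage( img ).scaled(
            ui->videoLabel->size(), Qt::KeepAspectRatio, Qt::SmoothTransformation ) );
    }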

Thanks for reading and sorry for the inconvenience!

Upvotes: 0

Views: 957

Answers (1)

Jesus

Reputation: 1

I got the solution :). In the end, I had to convert the YV12 buffer into a three-channel RGB array. I don't know why, but the cv::cvtColor conversions didn't work for me (I tried many combinations).
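
For reference, the kind of cv::cvtColor call I tried looked roughly like this (it may well work for other cards or OpenCV versions, but here it didn't give me a usable image):

    // The "standard" OpenCV route: wrap the planar YV12 data as a single-channel
    // Mat with height * 3/2 rows, then let cvtColor unpack it to 3-channel BGR.
    cv::Mat yv12( IMAGE_HEIGHT * 3 / 2, IMAGE_WIDTH, CV_8UC1, buffer );
    cv::Mat bgr;
    cv::cvtColor( yv12, bgr, cv::COLOR_YUV2BGR_YV12 );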

I found this YV12-to-RGB conversion (the yv12torgb function):

http://sourcecodebrowser.com/codeine/1.0/capture_frame_8cpp_source.html

And made some modifications to get a cv::Mat as the return value (instead of an unsigned char*). Here is the solution:

cv::Mat yv12ToRgb( uchar *pBuffer, const int w, const int h )
{
    #define clip_8_bit(val)       \
            {                       \
                if( val < 0 )        \
                val = 0;          \
                else if( val > 255 ) \
                val = 255;        \
            }

    cv::Mat result( h, w, CV_8UC3 );
    const long ySize = w * h;        // size of the luma (Y) plane
    const long uSize = ySize >> 2;   // each chroma plane is a quarter of the Y plane

    uchar *output = result.data;     // destination: packed 3-channel pixels
    uchar *pY = pBuffer;             // start of the Y plane
    uchar *pU = pY + ySize;          // first chroma plane
    uchar *pV = pU + uSize;          // second chroma plane

    int y, u, v;
    int r, g, b;    

    int sub_i_uv;
    int sub_j_uv;

    const int uv_width  = w / 2;
    const int uv_height = h / 2;

    for( int i = 0; i < h; ++i ) {
        // calculate u & v rows
        sub_i_uv = ((i * uv_height) / h);

        for( int j = 0; j < w; ++j ) {
            // calculate u & v columns
            sub_j_uv = (j * uv_width) / w;

            /***************************************************
            *
            *  Colour conversion from
            *    http://www.inforamp.net/~poynton/notes/colour_and_gamma/ColorFAQ.html#RTFToC30
            *
            *  Thanks to Billy Biggs <[email protected]>
            *  for the pointer and the following conversion.
            *
            *   R' = [ 1.1644         0    1.5960 ]   ([ Y' ]   [  16 ])
            *   G' = [ 1.1644   -0.3918   -0.8130 ] * ([ Cb ] - [ 128 ])
            *   B' = [ 1.1644    2.0172         0 ]   ([ Cr ]   [ 128 ])
            *
            *  Where in xine the above values are represented as
            *
            *   Y' == image->y
            *   Cb == image->u
            *   Cr == image->v
            *
            ***************************************************/

            y = pY[(i * w) + j] - 16;
            u = pU[(sub_i_uv * uv_width) + sub_j_uv] - 128;
            v = pV[(sub_i_uv * uv_width) + sub_j_uv] - 128;

            r = (int)((1.1644 * (double)y) + (1.5960 * (double)v));
            g = (int)((1.1644 * (double)y) - (0.3918 * (double)u) - (0.8130 * (double)v));
            b = (int)((1.1644 * (double)y) + (2.0172 * (double)u));

            clip_8_bit( b );
            clip_8_bit( g );
            clip_8_bit( r );

            // write the pixel into the cv::Mat (bytes stored in B, G, R order)
            *output++ = b;
            *output++ = g;
            *output++ = r;
        }
    }

    return result;
}

And then, my call is:

mMatFrame = yv12ToRgb( buffer, IMAGE_WIDTH, IMAGE_HEIGHT );
QImage qVideoCam((uchar*)mMatFrame.data, mMatFrame.cols, mMatFrame.rows, mMatFrame.step, QImage::Format_RGB888);
emit imageReady(qVideoCam);
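
One general Qt note (not specific to the capture card): the QImage built this way wraps mMatFrame's memory without copying it, so if the next frame overwrites mMatFrame before the GUI thread has painted, the safer form is to emit a deep copy:

emit imageReady(qVideoCam.copy());   // deep copy: the emitted image no longer depends on mMatFrame's buffer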

Thanks to all for the help and sorry for the inconvenience :)

Upvotes: 0
