nkint

Reputation: 11733

OpenGL doesn't like OpenCV resize

I'd like to use the data of an OpenCV cv::Mat as an OpenGL texture. I'm developing a Qt 4.8 application extending a QGLWidget (passing through QImage is something I don't really need). But something is wrong.

First the problem in screenshots, then the code I'm using.

If I don't resize the cv::Mat (grabbed from a video), everything is ok. If I scale it to half the size (scaleFactor = 2), everything is ok. If the scale factor is 2.8 or 2.9, everything is still ok. But at some scale factors it becomes buggy.

Here are the screenshots, with a red background to make the dimensions of the OpenGL quad clear:

scaleFactor = 2: [screenshot]

scaleFactor = 2.8: [screenshot]

scaleFactor = 3: [screenshot]

scaleFactor = 3.2: [screenshot]

Now the code of the paint method. I found the code for copying the cv::Mat data into the GL texture in this nice blog post.

void VideoViewer::paintGL()
{
    glClear (GL_COLOR_BUFFER_BIT);
    glClearColor (1.0, 0.0, 0.0, 1.0);

    glEnable(GL_BLEND);

    // Use a simple blendfunc for drawing the background
    glBlendFunc(GL_ONE, GL_ZERO);

    if (!cvFrame.empty()) {
        glEnable(GL_TEXTURE_2D);

        GLuint tex = matToTexture(cvFrame);
        glBindTexture(GL_TEXTURE_2D, tex);

        glBegin(GL_QUADS);
        glTexCoord2f(1, 1); glVertex2f(0, cvFrame.size().height);
        glTexCoord2f(1, 0); glVertex2f(0, 0);
        glTexCoord2f(0, 0); glVertex2f(cvFrame.size().width, 0);
        glTexCoord2f(0, 1); glVertex2f(cvFrame.size().width, cvFrame.size().height);
        glEnd();

        glDeleteTextures(1, &tex);
        glDisable(GL_TEXTURE_2D);

        glFlush();
    }
}

GLuint VideoViewer::matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
{
    // http://r3dux.org/2012/01/how-to-convert-an-opencv-cvmat-to-an-opengl-texture/

    // Generate a number for our textureID's unique handle
    GLuint textureID;
    glGenTextures(1, &textureID);

    // Bind to our texture handle
    glBindTexture(GL_TEXTURE_2D, textureID);

    // Catch silly-mistake texture interpolation method for magnification
    if (magFilter == GL_LINEAR_MIPMAP_LINEAR  ||
        magFilter == GL_LINEAR_MIPMAP_NEAREST ||
        magFilter == GL_NEAREST_MIPMAP_LINEAR ||
        magFilter == GL_NEAREST_MIPMAP_NEAREST)
    {
        std::cout << "VideoViewer::matToTexture > "
                  << "You can't use MIPMAPs for magnification - setting filter to GL_LINEAR"
                  << std::endl;
        magFilter = GL_LINEAR;
    }

    // Set texture interpolation methods for minification and magnification
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);

    // Set texture clamping method
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);

    // Set incoming texture format to:
    // GL_BGR       for CV_CAP_OPENNI_BGR_IMAGE,
    // GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
    // Work out other mappings as required ( there's a list in comments in main() )
    GLenum inputColourFormat = GL_BGR;
    if (mat.channels() == 1)
    {
        inputColourFormat = GL_LUMINANCE;
    }

    // Create the texture
    glTexImage2D(GL_TEXTURE_2D,     // Type of texture
                 0,                 // Pyramid level (for mip-mapping) - 0 is the top level
                 GL_RGB,            // Internal colour format to convert to
                 mat.cols,          // Image width  i.e. 640 for Kinect in standard mode
                 mat.rows,          // Image height i.e. 480 for Kinect in standard mode
                 0,                 // Border width in pixels (can either be 1 or 0)
                 inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
                 GL_UNSIGNED_BYTE,  // Image data type
                 mat.ptr());        // The actual image data itself

    return textureID;
}

and how the cv::Mat is loaded and scaled:

void VideoViewer::retriveScaledFrame()
{
    video >> cvFrame;

    cv::Size s = cv::Size(cvFrame.size().width/scaleFactor, cvFrame.size().height/scaleFactor);
    cv::resize(cvFrame, cvFrame, s);
}

Sometimes the image is rendered correctly, sometimes not. Why? There is surely some mismatch in how pixels are stored between OpenCV and OpenGL, but how do I resolve it? And why is it ok sometimes and not other times?

Upvotes: 1

Views: 709

Answers (1)

nkint

Reputation: 11733

Yes, it was a problem of how pixels are stored in memory. OpenCV and OpenGL can lay out pixel rows in different ways, and I had to understand this better: by default OpenGL expects each row of the uploaded image to start on a 4-byte boundary, while after cv::resize a BGR row is mat.cols * 3 bytes, which is not always a multiple of 4. That is why some scale factors work and others don't.

In OpenGL you can control this with glPixelStorei and the GL_UNPACK_ALIGNMENT and GL_UNPACK_ROW_LENGTH parameters, as in the sketch below.
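
For reference, a minimal sketch of the alignment fix (my own, not taken from the linked post), placed in matToTexture right before the glTexImage2D call and assuming a tightly packed 8-bit BGR or grayscale mat:

// OpenGL's default GL_UNPACK_ALIGNMENT is 4, i.e. it assumes every row
// of the incoming image starts on a 4-byte boundary. After cv::resize a
// BGR row is mat.cols * 3 bytes, which is often not a multiple of 4, so
// rows get read with a wrong offset and the texture looks skewed.
// Byte alignment is always safe for tightly packed data:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
             mat.cols, mat.rows, 0,
             inputColourFormat, GL_UNSIGNED_BYTE, mat.ptr());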

A nice answer about this can be found here.
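
If the mat were not continuous (for example a ROI of a bigger image), GL_UNPACK_ROW_LENGTH could also be derived from the mat's step; a hedged sketch, assuming an 8-bit mat:

// mat.step is the row stride in bytes; divide by the pixel size to get
// the stride in pixels, which is what GL_UNPACK_ROW_LENGTH expects.
glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(mat.step / mat.elemSize()));
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// ... glTexImage2D as above, then restore the default so later uploads
// are not affected:
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);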

Upvotes: 1
