Petersaber

Reputation: 893

OpenCV - odd HSV range detection

I have a Qt app where I have to find the HSV range of a couple of pixels around click coordinates, to track later on. This is how I do it:

    cv::Mat temp;
    cv::cvtColor(frame, temp, CV_BGR2HSV); //frame is pulled from a video or jpeg
    cv::Vec3b hsv=temp.at<cv::Vec3b>(frameX,frameY); //sometimes SIGSEGV?
    qDebug() << hsv.val[0]; //look up H
    qDebug() << hsv.val[1]; //look up S
    qDebug() << hsv.val[2]; //look up V
    //just base values so far, will work on range later
    emit hsvDownloaded(hsv.val[0], hsv.val[0]+5, hsv.val[1], 255, hsv.val[2], 255); //send to GUI which automatically updates worker thread

Now, things are odd. These are the results (the red circle indicates the click location):

[image: clicking blue]

[image: clicked red - correct]

With red it's weird: the upper half of the shape is detected correctly, the lower half is not, despite it being a solid mass of the same colour. Green screws up completely.

And here is an actual test:

[image: actual test with the ball]

It detects HSV {95,196,248}, which is frankly absurd (the base values are way too high). None of the pixels it detects is even the one that was clicked. The best values to detect that ball 100% of the time are H: 35-141, S: 0-238, V: 65-255. I wanted to derive an HSV range from a normalized histogram, but I can't even get the base values right. What's up? When OpenCV pulls a frame using kalibrowanyPlik.read(frame);, the default colour scheme is BGR, right?

Why would the colour detection work so randomly?

Upvotes: 0

Views: 737

Answers (1)

Micka

Reputation: 20160

As berak has mentioned, it looks like your code uses the indices in the wrong order when accessing the pixel.

That means your pixel locations are wrong, except for pixels that lie on the diagonal, so clicked objects that are near the diagonal will be detected correctly, while all the others won't.
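Concretely, here is a minimal sketch of the fix, reusing your temp/frameX/frameY names and assuming frameX/frameY are the click's x and y pixel coordinates:

    // .at<>(row, col) expects the y coordinate first, then x
    cv::Vec3b hsv = temp.at<cv::Vec3b>(frameY, frameX);
    // or equivalently, using the pixel-position overload:
    cv::Vec3b hsvAlt = temp.at<cv::Vec3b>(cv::Point(frameX, frameY));

The swapped indices would also explain the occasional SIGSEGV you commented on: on a non-square frame, (x,y) used as (row,col) can point outside the matrix.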

To avoid getting confused again and again, I want you to understand why OpenCV uses (row,col) ordering for indices:

OpenCV uses matrices to represent images. In mathematics, 2D matrices use (row,col) indexing; have a look at http://en.wikipedia.org/wiki/Index_notation#Two-dimensional_arrays and look at the indices. So for matrices, it is typical to use the row index first, followed by the column index.

Unfortunately, images and pixels typically use (x,y) indexing, which corresponds to the x/y axes of mathematical graphs and coordinate systems. So here the x position is used first, followed by the y position.

Luckily, OpenCV provides two different versions of the .at method: one to access pixel positions and one to access matrix elements (which are exactly the same elements in the end).

matrix.at<type>(row,column) // matrix indexing to access elements
// which equals
matrix.at<type>(y,x)

and

matrix.at<type>(cv::Point(x,y)) // pixel/position indexing to access elements

Since the first version should be slightly more efficient, it should be preferred if the positions aren't already given as cv::Point objects. So the best way is often to remember that OpenCV uses matrices to represent images, and that it uses matrix index notation to access elements.
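If it helps, here is a tiny self-contained sketch (not taken from your code, just an illustration) showing that both overloads address the same element:

    #include <opencv2/core/core.hpp>
    #include <iostream>

    int main()
    {
        // 3 rows x 5 columns, single channel, value = 10*row + col
        cv::Mat m(3, 5, CV_8UC1);
        for (int r = 0; r < m.rows; ++r)
            for (int c = 0; c < m.cols; ++c)
                m.at<uchar>(r, c) = static_cast<uchar>(10 * r + c);

        int x = 4, y = 2; // pixel position: column 4, row 2

        std::cout << (int)m.at<uchar>(y, x) << std::endl;            // matrix indexing (row, col) -> 24
        std::cout << (int)m.at<uchar>(cv::Point(x, y)) << std::endl; // pixel indexing (x, y)      -> 24
        return 0;
    }

Swapping the arguments in either call would silently read a different element, or run past the matrix bounds on a non-square image, which is what appears to have happened in your question.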

Btw, I've seen people wondering why matrix.at<type>(cv::Point(y,x)) doesn't work as intended after they've learned that OpenCV images use the "wrong" ordering. I hope this question doesn't come up after my explanation.

One more btw: back in school I already wondered why matrices index rows first, while graphs of functions index the x axis first. I found it stupid not to use the "same" ordering for both, but I still had to live with it :D (and in the end, the two don't have much to do with each other).

Upvotes: 3
