Alex Wade

Reputation: 747

Finding white pixels on monitor in camera image

I have a camera pointed at a monitor displaying a line of white pixels. I get an array of byte values back from the camera. The area of the camera's view is larger than the space taken up by the monitor. I need to find out where on the camera image the white monitor pixels appear. See the sample image below.

[Sample image: camera view of the monitor with the white line visible]

I need to make my algorithm more robust to varying lighting conditions. Specifically, I need to improve the step where I determine the value threshold for what counts as a potential white pixel. After identifying the likely white pixels, I find the largest neighborhood of them to determine my final white values.

I also tried keeping the N pixels with the highest values and treating the lowest of those N values as white. This worked fairly well in some conditions, but it stopped working when the room got slightly darker. I can tweak N to work in any lighting condition, but I would prefer not to have to provide any parameters manually. I am experimenting with percentiles now, but it runs quite slowly since the data set is very large.

Here's one of the methods that works decently, but the parameters have to be tweaked in different lighting conditions.

// Keep a running set of the topPixelCount brightest pixel values;
// the smallest element of the set is the candidate white threshold.
std::multiset<uint8_t> maxPixelValues;
for(unsigned j = 0; j < height; ++j)
{
    for(unsigned i = 0; i < width; ++i)
    {
        uint8_t pixelValue = buffer[j * width + i];
        if(maxPixelValues.size() < topPixelCount)
        {
            maxPixelValues.insert(pixelValue);
        }
        else
        {
            // Replace the current minimum if this pixel is brighter.
            auto minimumValuePosition = maxPixelValues.begin();
            if(pixelValue > *minimumValuePosition)
            {
                maxPixelValues.erase(minimumValuePosition);
                maxPixelValues.insert(pixelValue);
            }
        }
    }
}
// The lowest value among the top N pixels is treated as white.
return *maxPixelValues.begin();

Upvotes: 1

Views: 333

Answers (2)

smocking

Reputation: 3709

First you might want to threshold at one standard deviation above the mean to get rid of the darker parts of the screen. Then you can take advantage of the fact that the line is quite thin compared to some of the brighter area in the background and also far away from other bright areas thanks to the edge of the screen.

Pseudocode:

mask = threshold(img, mean(img) + stdev(img))
toignore = dilate(mask, 3, 3)
toignore = erode(toignore, 4, 4)
toignore = dilate(toignore, 3, 3)
mask = mask & !toignore
  1. Thresholding at mean+sd
  2. Dilation
  3. Erosion with a slightly bigger kernel to remove 1px-thin objects (such as the line), but keep pixels that are near other bright ones
  4. Dilation to add a margin smaller than the screen border
  5. Thresholded mask from 1 with toignore from 4 excluded

There are a few stray pixels left, but you can probably run a Hough transform at this point.
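If you are not using OpenCV, the dilate/erode steps are straightforward to implement directly. A sketch on a binary mask, with a square structuring element (the helper name and signature are my own):

```cpp
#include <cstdint>
#include <vector>

// One pass of morphology on a binary mask (0 or 1 per pixel) with a
// (2*radius+1)-square structuring element. Dilation sets a pixel if any
// neighbour is set; erosion keeps it only if all neighbours are set.
// Out-of-bounds neighbours count as unset.
std::vector<uint8_t> morphology(const std::vector<uint8_t>& mask,
                                int width, int height, int radius, bool dilate)
{
    std::vector<uint8_t> out(mask.size());
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            bool anySet = false, allSet = true;
            for (int dy = -radius; dy <= radius; ++dy)
            {
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    const int nx = x + dx, ny = y + dy;
                    const bool set = nx >= 0 && nx < width &&
                                     ny >= 0 && ny < height &&
                                     mask[ny * width + nx] != 0;
                    anySet = anySet || set;
                    allSet = allSet && set;
                }
            }
            out[y * width + x] = (dilate ? anySet : allSet) ? 1 : 0;
        }
    }
    return out;
}
```

The pseudocode above then becomes three calls (dilate, erode with a larger radius, dilate again) followed by clearing every mask pixel where `toignore` is set.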

Upvotes: 3

ssk

Reputation: 9275

You can use the Hough transform to find lines in an image: http://en.wikipedia.org/wiki/Hough_transform

Here is the OpenCV API: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html
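If pulling in OpenCV is not an option, the voting scheme itself is short. A minimal standalone accumulator over (theta, rho) for a binary mask, returning the strongest line (a sketch with my own names, not the OpenCV API):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Vote every set pixel into a (theta, rho) accumulator and return the
// (theta index, rho) of the strongest line. rho indices are offset by
// maxRho so they stay non-negative; theta spans [0, pi) in thetaSteps bins.
std::pair<int, int> strongestLine(const std::vector<uint8_t>& mask,
                                  int width, int height, int thetaSteps)
{
    const double pi = 3.14159265358979323846;
    const int maxRho = static_cast<int>(std::ceil(std::hypot(width, height)));
    const int rhoBins = 2 * maxRho + 1;
    std::vector<int> accumulator(static_cast<std::size_t>(thetaSteps) * rhoBins, 0);

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            if (!mask[y * width + x]) continue;
            for (int t = 0; t < thetaSteps; ++t)
            {
                const double theta = t * pi / thetaSteps;
                const int rho = static_cast<int>(
                    std::round(x * std::cos(theta) + y * std::sin(theta)));
                ++accumulator[t * rhoBins + (rho + maxRho)];
            }
        }

    // Pick the accumulator cell with the most votes.
    std::size_t best = 0;
    for (std::size_t i = 1; i < accumulator.size(); ++i)
        if (accumulator[i] > accumulator[best]) best = i;
    return { static_cast<int>(best) / rhoBins,
             static_cast<int>(best) % rhoBins - maxRho };
}
```

For a single thin line on a mostly empty mask this is fast enough, but OpenCV's `HoughLines` is the better choice once you need thresholding of multiple peaks.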

Upvotes: 0
