aries

Reputation: 909

Calculating sharpness of an image

I found on the internet that the Laplacian method is quite a good technique to compute the sharpness of an image. I was trying to implement it in OpenCV 2.4.10. How can I get the sharpness measure after applying the Laplacian function? Below is the code:

Mat src_gray, dst;
int kernel_size = 3;
int scale = 1;
int delta = 0;
int ddepth = CV_16S;

GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );

/// Convert the image to grayscale
cvtColor( src, src_gray, CV_RGB2GRAY );

/// Apply Laplace function
Mat abs_dst;

Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );

//compute sharpness
??

Can someone please guide me on this?

Upvotes: 16

Views: 37023

Answers (2)

Kornel

Reputation: 5354

Possible duplicate of: Is there a way to detect if an image is blurry?

Following that approach, your focus measure is the variance of the Laplacian response:

cv::Laplacian(src_gray, dst, CV_64F);

cv::Scalar mu, sigma;
cv::meanStdDev(dst, mu, sigma);

double focusMeasure = sigma.val[0] * sigma.val[0];  // variance of the Laplacian response
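If you are working from Python rather than C++, a rough equivalent of the same variance-of-the-Laplacian measure would be (the image path and the threshold value are only placeholders for illustration):

import cv2

img = cv2.imread("photo.jpg")                  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
lap = cv2.Laplacian(gray, cv2.CV_64F)
mu, sigma = cv2.meanStdDev(lap)
focusMeasure = sigma[0][0] ** 2                # variance of the Laplacian response

# Compare against a domain-specific threshold; 100.0 is purely illustrative.
isBlurry = focusMeasure < 100.0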

Edit #1:

Okay, so a well-focused image is expected to have sharper edges, so image gradients are instrumental in determining a reliable focus measure. Given an image gradient, the focus measure pools the data at each point into a unique value.

The use of second derivatives is one technique for passing the high spatial frequencies that are associated with sharp edges. As the second-derivative operator we use the Laplacian, which is approximated using the mask:

[image: 3x3 Laplacian mask]

To pool the data at each point, we use two methods. The first is the sum of all the absolute values, leading to the following focus measure:

$\sum_{m}\sum_{n} \lvert L(m, n) \rvert$

where L(m, n) is the convolution of the input image I(m, n) with the mask L. The second method calculates the variance of the absolute values, providing a new focus measure given by:

$\dfrac{1}{MN}\sum_{m}\sum_{n} \left( \lvert L(m, n) \rvert - \overline{L} \right)^2$

where $\overline{L}$ is the mean of the absolute values and $M \times N$ is the image size.
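A minimal Python/OpenCV sketch of those two pooled measures (the function name is my own; it assumes a single-channel grayscale input):

import cv2
import numpy as np

def laplacian_focus_measures(gray):
    # |L(m, n)|: absolute Laplacian response of the grayscale image
    abs_lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    sum_measure = abs_lap.sum()   # sum of the absolute values
    var_measure = abs_lap.var()   # variance of the absolute values
    return sum_measure, var_measure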

Read the article

J.L. Pech-Pacheco, G. Cristobal, J. Chamorro-Martinez, J. Fernandez-Valdivia, "Diatom autofocusing in brightfield microscopy: a comparative study", 15th International Conference on Pattern Recognition, 2000. (Volume:3 )

for more information.

Upvotes: 21

Vektorsoft

Reputation: 381

Not exactly the answer, but I arrived at a formula using an intuitive approach that has worked in the wild.

I'm currently working on a script to detect multiple faces in a picture of a crowd using mtcnn, which works very well; however, it also detects many faces so blurry that you could hardly call them faces.

Example image:

[image: original image]

Faces detected:

[image: red squares for detected faces]

Matrix of detected faces:

[image: 11x11 matrix of face crops]

mtcnn detected about 123 faces; however, many of them bore little resemblance to a face. In fact, many look more like a stain than anything else...

So I was looking for a way of 'filtering out' those blurry faces. I tried the Laplacian filter and the FFT-based filtering I found in this answer, however I got inconsistent and poor filtering results.

I turned my research toward computer vision topics and finally tried to implement an 'intuitive' way of filtering, using the following principle:

The blurrier an image is, the fewer 'edges' we have.

If we compare a crisp image with a blurred version of the same image, the blur tends to 'soften' any edges or adjacent contrasting regions. Based on that principle, I looked for a way of weighting edges and then a simple way of 'measuring' the result to get a confidence value.

I took advantage of Canny edge detection in OpenCV and then applied the mean value of the result (Python):

import cv2
import numpy as np

def getBlurValue(image):
    canny = cv2.Canny(image, 50, 250)   # edge map: 255 at edge pixels, 0 elsewhere
    return np.mean(canny)               # higher mean => more edge pixels

Canny returns a 2D array of the same size as the input image. I selected the thresholds 50 and 250, but they can be changed depending on your image and scenario.

Then I took the average value of the Canny result (definitely a formula to be improved if you know what you're doing).
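For instance (the file path is just a placeholder, and the edge-density variant is my own small tweak, not part of the original approach), the score can be used like this:

import cv2
import numpy as np

face = cv2.imread("face_crop.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
print(getBlurValue(face))

# Equivalent up to a factor of 255: the fraction of pixels Canny marks as edges,
# which keeps the score in a fixed 0-1 range.
canny = cv2.Canny(face, 50, 250)
print(np.count_nonzero(canny) / canny.size)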

When an image is blurred, the result tends toward zero, while a crisp image tends toward a positive value that gets higher the crisper the image is.

This value depends on the images and the thresholds, so it is not a universal solution for every scenario; however, a better cutoff can be achieved by normalizing the result and averaging over all the faces (I need to do more work on that subject).
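As a rough sketch of that idea (the function name and the keep-above-average rule are my own, assuming each detected face has already been cropped into its own image):

import numpy as np

def filter_blurry_faces(face_images):
    # Score every face crop, then keep only those above the average score,
    # so the threshold adapts to the overall sharpness of this particular photo.
    scores = np.array([getBlurValue(face) for face in face_images])
    return [face for face, s in zip(face_images, scores) if s > scores.mean()]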

In the example, the values are in the range 0-27.

Averaging all the faces, I got a blur value of about 3.7.

If I keep only the faces scoring above 3.7:

[image: most of the blurred faces are filtered out]

So I kept the crispest faces:

[image: the remaining crisp faces]

That consistently gave me better results than the other tests.

OK, you got me. This is a hacky way of measuring blurriness within the same image space, but I hope people can take advantage of these findings and apply what I learned in their own projects.

Upvotes: 25
