Reputation: 347
I have an image with its background blurred out. What I need to do is remove the blurred background and keep only the sharp foreground objects. Is there any way of doing this using OpenCV? The image will be something like the one below. I need to detect and subtract the blurred background.
Upvotes: 1
Views: 4151
Reputation: 3143
This question has been open for a while and I got directed here from another question. I figured I'd put up an answer with some code just to put some implementation behind the ideas of the previous answers.
Start off with Canny edge detection to find the foreground:
Dilate the image to connect up the Canny lines. Use findContours and select the biggest contour to create a mask.
There are holes in the mask because the contour hits the edge of the image. We can fill in small holes by inverting the mask and using findContours again. This time we'll filter out very large contours and draw the remaining ones onto the mask.
Now we just need to use the mask to crop out our image.
Here's the code:
import cv2
import numpy as np
# load image
img = cv2.imread("foreground.jpg");
# grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY);
# canny
canned = cv2.Canny(gray, 100, 200);
# dilate to close holes in lines
kernel = np.ones((5,5),np.uint8)
mask = cv2.dilate(canned, kernel, iterations = 1);
# find contours
# OpenCV 3.x returns (image, contours, hierarchy) here; on OpenCV 2.x or 4.x,
# findContours returns only (contours, hierarchy), so drop the first underscore
_, contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
# find big contours
biggest_cntr = None;
biggest_area = 0;
for contour in contours:
    area = cv2.contourArea(contour);
    if area > biggest_area:
        biggest_area = area;
        biggest_cntr = contour;
# draw contours
crop_mask = np.zeros_like(mask);
cv2.drawContours(crop_mask, [biggest_cntr], -1, (255), -1);
# fill in holes
# inverted
inverted = cv2.bitwise_not(crop_mask);
# contours again
_, contours, _ = cv2.findContours(inverted, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
# find small contours
small_cntrs = [];
for contour in contours:
    area = cv2.contourArea(contour);
    if area < 20000:
        print(area); # debug output
        small_cntrs.append(contour);
# draw on mask
cv2.drawContours(crop_mask, small_cntrs, -1, (255), -1);
# opening + median blur to smooth jaggies
crop_mask = cv2.erode(crop_mask, kernel, iterations = 1);
crop_mask = cv2.dilate(crop_mask, kernel, iterations = 1);
crop_mask = cv2.medianBlur(crop_mask, 5);
# crop image
crop = np.zeros_like(img);
crop[crop_mask == 255] = img[crop_mask == 255];
# show
cv2.imshow("original", img);
cv2.imshow("gray", gray);
cv2.imshow("canny", canned);
cv2.imshow("mask", crop_mask);
cv2.imshow("cropped", crop);
cv2.waitKey(0);
Upvotes: 2
Reputation: 3115
You could start with a simple canny edge detector, which would already give you hints on how to solve the problem:
From there, look for a suitable iteration that maps the pixels enclosed by the edges to a new image.
Upvotes: 2
Reputation:
This is a priori a difficult task, because flat areas (such as the shirt) have the same appearance as the blurred ones (i.e. low gradient activity). One can try some segmentation method and rate the edge strength around every region, but this isn't straightforward.
For a poor man's solution, here is what I tried:
use an edge detector and binarize so that the areas of interest are enclosed;
perform connected components analysis and select the largest blob (the blurred area);
hole-fill the blob to get a solid mask.
Upvotes: 4