Reputation: 41
I want to get the image difference for a print captured using a camera.
I tried many solutions using Python libraries: OpenCV, ImageMagick, etc.
For better accuracy, the conditions under which I capture the images for comparison are:
Conditions to capture the image: 1. The camera never moves (it is mounted on a fixed stand). 2. The object is placed manually on a white sheet, so it is never perfectly aligned (a slight variation in angle every time, since placement is manual).
Image samples captured using the camera for the code below:
Image sample 1: white dots
Image sample 2: original image
Image sample 3: black dots
An accepted output for the print with white dots is not available, but it should mark only the difference (defect):
Currently I am using the following ImageMagick command for the image difference:
compare -highlight-color black -fuzz 5% -metric AE Image_1.png Image_2.png -compose src diff.png
Code:
import subprocess

# -fuzz 5% ignores minor differences between the two images
cmd = 'compare -highlight-color black -fuzz 5% -metric AE Input.png output.png -compose src diff.png'
subprocess.call(cmd, shell=True)
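As a side note on the command above: compare writes the AE metric (the count of differing pixels) to stderr, not stdout, and exits with status 1 when the images differ, so the return code alone is not useful. A minimal sketch of a wrapper that captures and parses that count — the helper names are my own, and the run itself assumes ImageMagick is installed:

```python
import re
import subprocess

def parse_ae(stderr_text):
    """Parse the AE pixel count from ImageMagick compare's stderr.

    The output looks like '6025' or, with extra detail, '1243.41 (0.0189732)'.
    """
    m = re.match(r'\s*([\d.e+]+)', stderr_text)
    if m is None:
        raise ValueError('unexpected compare output: %r' % stderr_text)
    return int(float(m.group(1)))

def image_diff(img1, img2, out, fuzz='5%'):
    """Run compare and return the number of differing pixels.

    compare exits 1 when the images differ, so we parse stderr instead
    of relying on the return code.
    """
    proc = subprocess.run(
        ['compare', '-highlight-color', 'black', '-fuzz', fuzz,
         '-metric', 'AE', img1, img2, '-compose', 'src', out],
        capture_output=True, text=True)
    return parse_ae(proc.stderr)
```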
The output of this difference is incorrect because the comparison works pixel to pixel; it is not smart enough to mark only the real differences:
The approach I described would produce the required difference as output, but I have not found a library or ImageMagick command for such an image comparison.
Is there any Python code or ImageMagick command for doing this?
Upvotes: 3
Views: 5003
Reputation: 53081
Although you do not want pixel-by-pixel processing, here is a subimage-search compare using ImageMagick. It crops off the black, pads one image, and then shifts the smaller image over the larger one to find the best match location.
convert image1.jpg -gravity north -chop 0x25 image1c.png
convert image2.jpg -gravity north -chop 0x25 -gravity center -bordercolor "rgb(114,151,157)" -border 20x20 image2c.png
compare -metric rmse -subimage-search image2c.png image1c.png null:
1243.41 (0.0189732) @ 22,20
convert image2c.png image1c.png -geometry +22+20 -compose difference -composite -shave 22x20 -colorspace gray -auto-level +level-colors white,red diff.png
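The three-command pipeline above can be driven from Python. compare prints the match as "rmse (normalized) @ x,y" on stderr (e.g. the "1243.41 (0.0189732) @ 22,20" line above), so a small parser recovers the offset to feed into the final convert. A sketch under the assumption that ImageMagick is installed; the function names are my own:

```python
import re
import subprocess

def parse_offset(stderr_text):
    """Parse the best-match offset from compare -subimage-search output,
    e.g. '1243.41 (0.0189732) @ 22,20' -> (22, 20)."""
    m = re.search(r'@ (\d+),(\d+)', stderr_text)
    if m is None:
        raise ValueError('no match offset in: %r' % stderr_text)
    return int(m.group(1)), int(m.group(2))

def best_match_offset(large, small):
    """Run the subimage search; compare writes the metric to stderr
    and exits nonzero on any difference, so capture stderr directly."""
    proc = subprocess.run(
        ['compare', '-metric', 'rmse', '-subimage-search',
         large, small, 'null:'],
        capture_output=True, text=True)
    return parse_offset(proc.stderr)

def difference_image(large, small, x, y, out='diff.png'):
    """Compose the difference at the found offset, as in the convert
    command above."""
    subprocess.run(
        ['convert', large, small, '-geometry', '+%d+%d' % (x, y),
         '-compose', 'difference', '-composite',
         '-shave', '%dx%d' % (x, y), '-colorspace', 'gray',
         '-auto-level', '+level-colors', 'white,red', out])
```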
ADDITION:
If you want to use only compare, then you need to add -fuzz 15% to the compare command:
compare -metric rmse -fuzz 15% -subimage-search image2c.png image1c.png diff.png
Two images are produced; the difference image is the first, so look at diff-0.png.
Upvotes: 2
Reputation: 974
It seems you are doing a defect-detection task. The first solution that comes to mind is image registration. First, try to take the images under the same conditions (lighting, camera angle, etc.); note that one of your provided images is 2 pixels bigger than the other.
Then you should register the two images and match one to the other, like this
Then warp them with the help of the homography matrix and generate an aligned image; in this case the result is like this:
Then take the difference of the aligned image with the query image and threshold it; the result:
As I said, if you capture your frames more precisely, the registration result will be better and lead to more accurate performance.
The code for each part (mostly taken from here):
import cv2
import numpy as np

MAX_FEATURES = 1000
GOOD_MATCH_PERCENT = 0.5

def alignImages(im1, im2):
    # Convert images to grayscale
    im1Gray = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
    im2Gray = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)

    # Detect ORB features and compute descriptors.
    orb = cv2.ORB_create(MAX_FEATURES)
    keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, None)
    keypoints2, descriptors2 = orb.detectAndCompute(im2Gray, None)

    # Match features.
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)

    # Sort matches by score (best first)
    matches.sort(key=lambda x: x.distance, reverse=False)

    # Remove not-so-good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]

    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
    cv2.imwrite("matches.jpg", imMatches)

    # Extract locations of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)
    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt

    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)

    # Use the homography to warp im1 into im2's frame
    height, width, channels = im2.shape
    im1Reg = cv2.warpPerspective(im1, h, (width, height))

    return im1Reg

if __name__ == '__main__':
    # Read the reference and query images
    refFilename = "vv9gFl.jpg"
    imFilename = "uP3CYl.jpg"
    imReference = cv2.imread(refFilename, cv2.IMREAD_COLOR)
    im = cv2.imread(imFilename, cv2.IMREAD_COLOR)

    # The registered (aligned) image is returned by alignImages.
    imReg = alignImages(im, imReference)

    # Write the aligned image to disk.
    outFilename = "aligned.jpg"
    cv2.imwrite(outFilename, imReg)
For the image difference and thresholding:

aligned = cv2.imread("aligned.jpg", 0)
aligned = aligned[:, :280]
b = cv2.imread("vv9gFl.jpg", 0)
b = b[:, :280]
print(aligned.shape)
print(b.shape)

diff = cv2.absdiff(aligned, b)
cv2.imwrite("diff.png", diff)

threshold = 25
aligned[np.where(diff > threshold)] = 255
aligned[np.where(diff <= threshold)] = 0
cv2.imwrite("threshold.png", aligned)
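The difference-and-threshold step above can be sketched in pure NumPy as well (a small self-contained example on synthetic arrays; `diff_mask` is my own helper name, not part of OpenCV):

```python
import numpy as np

def diff_mask(aligned, reference, threshold=25):
    """Absolute difference of two uint8 grayscale images, then a binary
    mask: 255 where the difference exceeds the threshold, 0 elsewhere.
    The signed cast avoids uint8 wraparound when subtracting, matching
    cv2.absdiff followed by the manual np.where indexing above."""
    diff = np.abs(aligned.astype(np.int16) - reference.astype(np.int16))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)

# Tiny synthetic example: one "defect" pixel differs by 100.
a = np.full((4, 4), 120, dtype=np.uint8)
b = a.copy()
b[1, 2] = 220
mask = diff_mask(b, a)
# → mask[1, 2] == 255 and every other pixel is 0
```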
If you have lots of images and want to do a defect-detection task at scale, I suggest training a denoising autoencoder (a deep artificial neural network). Read more here.
Upvotes: 7