Reputation: 157
Hi, I am trying to perform image alignment and focus stacking, and I have achieved some results. However, the stacked image still has some noise and is not the result I was hoping for. As per my understanding, this is because of the image alignment performed before stitching the images together. The noise can also be due to the approach used, where alignment is done by pixel matching. I came across an article here: https://www.mfoot.com/blog/2011/07/08/enfuse-for-extended-dynamic-range-and-focus-stacking-in-microscopy/ It describes an alternative approach where, instead of matching individual pixels between images, it considers pixels from a local neighbourhood. I can't find anything else on this. Can someone point me to any resources that might be helpful?
detector = cv2.ORB_create(1000)
image_1_kp, image_1_desc = detector.detectAndCompute(image1gray, None)
Upvotes: 1
Views: 849
Reputation: 3763
I will give you a more general answer based on OpenCV's feature-matching example: match features between the two images, then use the matched keypoints to estimate a transform that aligns one image with the other.
As an example, given two images, you can do feature matching in OpenCV as follows:
import cv2
import numpy as np

img1 = cv2.imread("box.png", 0)           # queryImage
img2 = cv2.imread("box_in_scene.png", 0)  # trainImage
H, W = img1.shape

# Initiate ORB detector
orb = cv2.ORB_create(1000)

# Find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Create BFMatcher object (Hamming distance suits ORB's binary descriptors)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors
matches = bf.match(des1, des2)

# Sort them in order of ascending distance, so the best matches come first
matches = sorted(matches, key=lambda x: x.distance)
From there, you can take the 10 best matches and use their keypoint coordinates to estimate the transform:
# Get the keypoint coordinates of the 10 best matches
query_pts = np.array([kp1[match.queryIdx].pt for match in matches[:10]])
train_pts = np.array([kp2[match.trainIdx].pt for match in matches[:10]])

# Estimate the affine transform mapping the train points onto the query points
# (estimateAffine2D uses RANSAC by default and also returns an inlier mask)
M, inliers = cv2.estimateAffine2D(train_pts, query_pts)

# Warp img2 into img1's frame
img3 = cv2.warpAffine(img2, M, (W, H))
The aligned image should then look like this:
Upvotes: 2