Reputation: 73
I am trying to implement KAZE and A-KAZE in Python with OpenCV for Feature Detection and Description on an aerial image. What is the code? Also, which descriptor should go along with it for Feature Matching?
Upvotes: 3
Views: 6613
Reputation: 2698
KAZE, like earlier state-of-the-art methods such as SIFT and SURF, is a Local Feature Descriptor, and in some respects it shows better performance in both detection and description than the SIFT descriptor. A-KAZE, on the other hand, is a Local Binary Descriptor, and it presents excellent results in terms of speed and performance compared both to Local Feature Descriptors (SIFT, SURF, and KAZE) and to Local Binary Descriptors (ORB and BRISK).
To answer your question: both can be used for Feature Matching. Note, however, that the A-KAZE descriptor does not fit well in very small patches (e.g., a 32x32 patch); to avoid returning keypoints without descriptors, A-KAZE normally removes such keypoints.
Therefore, the choice between KAZE and A-KAZE depends on the context of your application, but a priori A-KAZE performs better than KAZE.
In this example, I will show Feature Detection and Matching with A-KAZE through the FLANN algorithm using Python and OpenCV. First, load the input image and the image that will be used for training. In this example, we are using these images:
image1:

image2:
# Imports
import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np
# Load the input and training-set images directly in grayscale
image1 = cv.imread(filename = 'image1.jpg', flags = cv.IMREAD_GRAYSCALE)
image2 = cv.imread(filename = 'image2.jpg', flags = cv.IMREAD_GRAYSCALE)
Note that we pass flags = cv.IMREAD_GRAYSCALE when loading the images: OpenCV's default color mode is BGR, and the descriptors below work on single-channel images, so we read them in grayscale from the start.
Now we will use the A-KAZE algorithm:
# Initiate A-KAZE descriptor
AKAZE = cv.AKAZE_create()
# Find the keypoints and compute the descriptors for input and training-set image
keypoints1, descriptors1 = AKAZE.detectAndCompute(image1, None)
keypoints2, descriptors2 = AKAZE.detectAndCompute(image2, None)
The features detected by the A-KAZE algorithm can be combined to find objects or patterns that are similar between different images.
Now we will use the FLANN algorithm:
# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)

# The KD-tree index expects float vectors, so cast the binary descriptors
descriptors1 = np.float32(descriptors1)
descriptors2 = np.float32(descriptors2)

# Create FLANN object
FLANN = cv.FlannBasedMatcher(indexParams = index_params,
                             searchParams = search_params)

# Match descriptor vectors, keeping the 2 nearest neighbours of each
matches = FLANN.knnMatch(queryDescriptors = descriptors1,
                         trainDescriptors = descriptors2,
                         k = 2)
# Lowe's ratio test
ratio_thresh = 0.7
# "Good" matches
good_matches = []
# Filter matches
for m, n in matches:
    if m.distance < ratio_thresh * n.distance:
        good_matches.append(m)
# Draw only "good" matches
output = cv.drawMatches(img1 = image1,
                        keypoints1 = keypoints1,
                        img2 = image2,
                        keypoints2 = keypoints2,
                        matches1to2 = good_matches,
                        outImg = None,
                        flags = cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.imshow(output)
plt.show()
And the output will be:
To perform the same example with the KAZE descriptor, just initialize that descriptor instead, changing:
AKAZE = cv.AKAZE_create()
To:
KAZE = cv.KAZE_create()
To learn more about Detection, Description, and Feature Matching techniques, Local Feature Descriptors, Local Binary Descriptors, and algorithms for Feature Matching, I recommend the following repositories on GitHub:
Upvotes: 22