Reputation: 417
I've recently started working with OpenCV, and I'm trying to check whether a pattern image appears in another image.
The problem is that the matcher finds good matches for individual keypoints but does not take their location relative to each other into account.
For example: in both cases the matcher found keypoint matches and did so correctly, but in the second case the matches are scattered across different places in the picture. I would like to treat this situation as "pattern not found", preferably with built-in OpenCV functions.
A function for comparing keypoints:
import cv2
import numpy as np
from matplotlib import pyplot as plt


def keypoint_match(pic, pattern, pic_desc, pattern_desc, drawDebug=False):
    # pic_desc / pattern_desc are (keypoints, descriptors) pairs
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(pattern_desc[1], pic_desc[1], k=2)

    # Lowe's ratio test to filter out ambiguous matches
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append(m)
    print('Good matches:', len(good))

    find = len(good) > 10
    if drawDebug or find:
        image = cv2.drawMatches(np.uint8(pattern), pattern_desc[0], np.uint8(pic), pic_desc[0], good, None,
                                flags=0, matchColor=(0, 255, 0), singlePointColor=(255, 0, 0))
        plt.figure(figsize=(20, 20))
        plt.axis('off')
        plt.imshow(image.astype(np.uint8))
        plt.show()
    return good, find
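For context, this is roughly how the function is invoked, assuming pic_desc and pattern_desc are the (keypoints, descriptors) pairs returned by a SIFT detector; the exact setup is in the uploaded source, so treat this only as a sketch:

import cv2

pic = cv2.imread('test1.png', cv2.IMREAD_GRAYSCALE)
pattern = cv2.imread('smile.png', cv2.IMREAD_GRAYSCALE)

# SIFT_create is available directly in cv2 on recent builds
# (older builds expose it as cv2.xfeatures2d.SIFT_create)
sift = cv2.SIFT_create()
pic_desc = sift.detectAndCompute(pic, None)          # (keypoints, descriptors)
pattern_desc = sift.detectAndCompute(pattern, None)  # (keypoints, descriptors)

good, find = keypoint_match(pic, pattern, pic_desc, pattern_desc, drawDebug=True)
print('Result: pattern found' if find else 'Result: pattern not found')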
Result:
test1.png
Good matches: 32
Pattern found: 'smile.png'
Result: pattern found
test2.png
Good matches: 25
Pattern found: 'smile.png'
Result: pattern found
I have uploaded the test pictures and the full source code here.
UPD:
As far as I know, there are three methods of finding an object in an image.
My task is to find out whether a certain emblem is present in the picture. The emblem may be slightly tilted or scaled differently. The background color of the image may also vary, for example if it is a photo. As a rule, the emblems have a complex structure, but roughly the same contours.
That is why I chose the "Keypoint detection" method.
I have added 2 photos for the test.
The problem is that there are partial matches and I would like to exclude them, but I don't know how. The pictures are included only to demonstrate this problem of partial matches.
Upvotes: 2
Views: 2189
Reputation: 3461
Disclaimer:
If I am correct in my understanding that this is a case of the infamous XY problem, and that what you actually want is to find the smiley image in the two test images irrespective of the exact technique used, then I would like to present the following:
Solution: template_matching.py
import cv2
import numpy as np
from matplotlib import pyplot as plt
import sys

img_rgb = cv2.imread(sys.argv[1])
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)

# Load the pattern as grayscale and get its size
template = cv2.imread('smile.png', 0)
w, h = template.shape[::-1]

# Slide the template over the image and keep locations scoring above the threshold
res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where(res >= threshold)

# Draw a red rectangle around every detection
for pt in zip(*loc[::-1]):
    cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2)

cv2.imwrite('res.png', img_rgb)
which you can run like this:
Result:
python3 template_matching.py test1.png
and it would give you a result like this:
whereas the other test image,
python3 template_matching.py test2.png
gives a result like this:
Explanation:
I may not be totally familiar with SIFT (the so-called best feature detection algorithm out there), but it is usually used to detect corners, extract features, and so on. That alone is not sufficient for matching (at least in my opinion); you also need to put those features together to define an object, and that object is what you could then try to find in the test images.
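For example, one common way to "put those features together" is to check whether the matched keypoints agree on a single geometric transformation: fit a homography with RANSAC and count the inliers. A rough sketch (assuming kp_pattern, kp_pic and good come from the question's keypoint_match setup; I have not tuned this against your images) could look like this:

import cv2
import numpy as np

def geometric_check(kp_pattern, kp_pic, good, min_inliers=10):
    # At least 4 point pairs are needed to estimate a homography
    if len(good) < 4:
        return False

    # Coordinates of the matched keypoints (query = pattern, train = picture)
    src_pts = np.float32([kp_pattern[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp_pic[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC keeps only matches consistent with one transformation of the pattern
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    if M is None:
        return False
    return int(mask.sum()) >= min_inliers

Matches scattered over unrelated parts of the picture tend not to survive such a check, which is exactly the "pattern not found" behaviour asked for.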
An alternative approach I could think of, which might be more involved and would not rely on a built-in blackbox OpenCV function, is to use the Hough line transform to get the two vertical lines and then the Hough circle transform to get the semicircle formed by the smile; together they would define your "smiley" (a rough sketch of this idea follows at the end of this answer). But anyway, since you do say in your question:
preferably with built-in OpenCV functions.
the above solution is, I think, the quickest and most concise one for your problem.
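For completeness, the Hough-based idea mentioned above could start out along these lines; note that all the parameter values here (Canny thresholds, line lengths, circle radii) are placeholders that would need tuning against the actual images, and HoughCircles looks for full circles, so the smile arc is not guaranteed to register:

import cv2
import numpy as np
import sys

img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Hough line transform: look for line segments, e.g. the two vertical "eyes"
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=20, maxLineGap=5)

# Hough circle transform: look for the circular arc of the "smile"
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=100)

print('line segments:', 0 if lines is None else len(lines))
print('circles:', 0 if circles is None else circles.shape[1])

You would still have to combine the detected lines and circles into a "smiley" hypothesis yourself, which is why I consider template matching the quicker route here.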
Upvotes: 1