I wanted to compare the similarities between two signatures.
Python version : 3.7.7
OpenCv version : 4.2.0
Here's what I have done so far:
import cv2
import numpy as np
#uploading images
template = cv2.imread("C://Users//subhr//Ams_1.jpg")
original = cv2.imread("C://Users//subhr//Ams_2.jpg")
#resizing images
template = cv2.resize(template,(528,152))
cv2.imshow("template image", template)
cv2.waitKey(0)
cv2.destroyAllWindows()
template.shape # (rows, columns, channels)
original = cv2.resize(original,(528,152))
cv2.imshow("original image", original)
cv2.waitKey(0)
cv2.destroyAllWindows()
#ORB Detector
orb = cv2.ORB_create()
original = cv2.Canny(original, 50, 200)
template = cv2.Canny(template, 50, 200)
# key points and descriptor calculation
kp1, desc_1 = orb.detectAndCompute(template, None)
kp2, desc_2 = orb.detectAndCompute(original, None)
#creating matches
matcher = cv2.DescriptorMatcher_create(cv2.DescriptorMatcher_BRUTEFORCE_HAMMING)
matches_1 = matcher.knnMatch(desc_1, desc_2, 2)
len(matches_1)
result = cv2.drawMatchesKnn(template, kp1, original, kp2, matches_1, None)  # kp1 belongs to template, kp2 to original
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
#distance similarity
good_points = []
for m, n in matches_1:
    if m.distance < 0.8 * n.distance:
        good_points.append(m)
len(good_points)
result = cv2.drawMatches(template, kp1, original, kp2, good_points, None)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
print(len(kp1))
print(len(kp2))
#calculating ratio
print("How good is the match : ",len(kp1)/len(good_points))
At this point I have tried ORB and AKAZE. I wanted to use SIFT, but it's not available in my build. I have tried OpenCV version 3.4 as well, with no luck.
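Note: SIFT's patent expired in March 2020 and it was moved into OpenCV's main module in version 4.4.0, so with opencv-python >= 4.4 a sketch like the following should work (the file path is the one from the question; the matcher choice is the standard one for float descriptors):

import cv2

img = cv2.imread("C://Users//subhr//Ams_1.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp, desc = sift.detectAndCompute(img, None)
# SIFT descriptors are float vectors, so match them with an L2 matcher,
# not the Hamming matcher used above for binary ORB/AKAZE descriptors
matcher = cv2.BFMatcher(cv2.NORM_L2)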
Is there a better way to compare similarities in signatures and standardize the entire process?
link for images : https://ibb.co/yhvTrng , https://ibb.co/xfBzCgW
Thank you.
Answer:
You haven't given any example images, but even so, I'm not sure that using feature points such as ORB or KAZE/AKAZE would be a good idea at all. Those still detect more or less corner-like points, whereas comparing signatures with some degree of accuracy seems to require much richer information (curvature etc.). This looks like something that a simple convolutional network would be good at. Off the top of my head, I think you could use, for example, an architecture like this:
signature --> ConvNet --> head-1 --> signature embedding --->---
                      |                                         |---> (Hinge) loss
                      |                                         |
                      --> head-2 --> signature embedding --->----
where the loss forces the embeddings of similar signatures to be close together and the embeddings of dissimilar ones to be far apart.
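A minimal sketch of that idea in PyTorch (an assumed framework choice; the layer sizes, embedding dimension and margin are illustrative, not tuned), using a contrastive, hinge-style loss on pairs of signatures:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SignatureEmbedder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # small shared ConvNet backbone; weight sharing is what makes
        # the embeddings of the two branches comparable
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(64 * 4 * 4, embed_dim)

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        return F.normalize(self.head(feats), dim=1)  # unit-length embedding

def contrastive_loss(emb1, emb2, same, margin=1.0):
    # same = 1 for two signatures by the same person, 0 otherwise;
    # genuine pairs are pulled together, other pairs pushed past the margin
    d = F.pairwise_distance(emb1, emb2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

model = SignatureEmbedder()
a = torch.randn(8, 1, 152, 528)  # batch of grayscale signature crops
b = torch.randn(8, 1, 152, 528)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(model(a), model(b), same)
loss.backward()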
You could base your architecture on this somewhat old paper. Regarding the data to train your network, there are many datasets (have a look at this Kaggle link, for example).
EDIT: Since you say in your comment that you prefer to avoid Deep Learning-based approaches, I think it would be useful to see what your current approach lacks in terms of features.
The low-level features that you extract capture local information (don't use ORB, though: it's designed to be fast, not accurate; stick with KAZE or AKAZE if you can), but what matters for signature recognition is how those features are spatially distributed (the distribution should be roughly the same in similar signatures). This might be addressed by either or both of these things:
1) Change the way you evaluate similarity so that it incorporates the spatial distribution of the feature points instead of relying only on the number of common feature points (see the sketch below).
2) Design a complementary high-level feature by hand that captures global aspects of the signature (it could be as simple as a height/width ratio, or more complex and take curvature into account, etc.).
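As a rough illustration of both points, here is a sketch (the 4x12 grid, the histogram-intersection metric and the Otsu thresholding are arbitrary, untuned choices, and it assumes grayscale images that actually contain ink):

import cv2
import numpy as np

def keypoint_histogram(gray, grid=(4, 12)):
    # bin AKAZE keypoint locations into a coarse grid and normalize,
    # so two signatures are compared by where their detail lies
    kps = cv2.AKAZE_create().detect(gray, None)
    h, w = gray.shape[:2]
    hist = np.zeros(grid, dtype=np.float32)
    for kp in kps:
        x, y = kp.pt
        row = min(int(y / h * grid[0]), grid[0] - 1)
        col = min(int(x / w * grid[1]), grid[1] - 1)
        hist[row, col] += 1
    return hist / hist.sum() if hist.sum() else hist

def spatial_similarity(gray1, gray2):
    h1 = keypoint_histogram(gray1).reshape(-1, 1)
    h2 = keypoint_histogram(gray2).reshape(-1, 1)
    # histogram intersection: 1.0 means identical spatial distributions
    return float(cv2.compareHist(h1, h2, cv2.HISTCMP_INTERSECT))

def height_width_ratio(gray):
    # simple hand-made global feature: aspect ratio of the ink's bounding box
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(bw))
    return h / w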