ill

Reputation: 362

OpenCV unsatisfying results when finding Homography from ORB feature detection

Even though the ORB feature matching seems quite solid and I only take the 20 best matches for cv.findHomography, the resulting polyline is terrible. Note that in the results shown in the attached image, the top-right image is a video stream, hence the variation in the matched results. Is there a library that could be used to get better results, or am I making any major mistakes in my code?

[Screenshot: drawn ORB matches between the reference pattern and the video frame, with the resulting polyline]

    # des1 and des2 come from an ORB detector created with
    # cv.ORB_create(10000, 1.2, nlevels=8, edgeThreshold=5);
    # kp1/des1 were computed from the reference image img1, gray is the current video frame

    kp2, des2 = orb.detectAndCompute(gray, None)
    matches = bf.knnMatch(des1, des2, k=2)

    # Lowe's ratio test to discard ambiguous matches
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append(m)

    # keep only the 20 matches with the smallest descriptor distance
    matches = sorted(good, key=lambda x: x.distance)
    src_pts = np.float32([kp1[m.queryIdx].pt for m in matches[:20]]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches[:20]]).reshape(-1, 1, 2)

    # homography that maps frame points (dst_pts) into the reference image (src_pts)
    M, mask = cv.findHomography(dst_pts, src_pts, cv.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()

    # rectangle anchored at the origin, sized like the bounding box of the
    # matched reference points
    h = src_pts.max(0)[0][1] - src_pts.min(0)[0][1]
    w = src_pts.max(0)[0][0] - src_pts.min(0)[0][0]
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)

    # project the rectangle corners through the homography
    dst = cv.perspectiveTransform(pts, M)

    # `good` is a flat list of DMatch, so cv.drawMatches is the matching draw call
    # (cv.drawMatchesKnn expects a list of lists of matches)
    img3 = cv.drawMatches(img1, kp1, gray, kp2, good, None,
                          flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    img3 = cv.polylines(img3, [np.int32(dst)], True, (0, 0, 255), 3, cv.LINE_AA)

    # Code for showing img3 would follow
    

Upvotes: 2

Views: 1298

Answers (1)

fdermishin

Reputation: 3686

There could be several problems with this setup:

  1. The pattern itself. It has repeated squares, so there can be matches that connect different squares in the first and the second image. This can produce a lot of outliers, so the homography can't be fitted in a reasonable way (the first sketch below this list shows one way to check for this).
  2. Low image quality. The smaller image seems to have low resolution and to be a bit blurry, which would make matching more difficult and produce more outliers. However, the image actually has a higher resolution and is merely displayed at a small scale, so this point is not valid.
  3. The feature points are located in a small region of the image, while you project the corners of the image, which lie far away from that region. This makes homography estimation very unstable, and uncertainties in the coordinates of the feature points get magnified several times: even jitter of less than 1 pixel can result in projection errors of up to about 8 pixels. It can be even worse, because a RANSAC threshold of 5.0 accepts relatively imprecise points as inliers (the second sketch below this list demonstrates the effect).
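
A quick way to check point 1 in practice is to hand all ratio-test survivors to RANSAC (instead of only the 20 closest) and look at the inlier mask it returns; a low inlier count suggests that many "good" matches connect the wrong copies of the repeated squares. This is only a minimal sketch reusing the variable names from the question (cv, np, kp1, kp2, good), and the "well below half" reading is a rough rule of thumb, not an OpenCV guarantee:

    # Feed every ratio-test survivor to RANSAC and count how many of them
    # actually agree on a single homography.
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, mask = cv.findHomography(dst_pts, src_pts, cv.RANSAC, 5.0)

    inliers = int(mask.sum())
    print(f"{inliers}/{len(good)} matches are RANSAC inliers")
    # If the inlier count is well below half, the repeated squares are very
    # likely being matched to the wrong copies of themselves.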
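
To make point 3 concrete, the following self-contained sketch simulates it: the true mapping is the identity, the correspondences are confined to a 100x100 patch of a 640x480 frame, and the only disturbance is sub-pixel jitter, yet the projected frame corners typically end up several pixels off. All numbers here (patch size, noise level, frame size, random seed) are arbitrary illustrative choices, not values taken from the question:

    import cv2 as cv
    import numpy as np

    rng = np.random.default_rng(0)

    # 20 correspondences confined to a 100x100 patch of a 640x480 frame
    pts = rng.uniform(270, 370, size=(20, 2)).astype(np.float32)

    # Ground truth is the identity mapping; the "measured" points only carry
    # sub-pixel jitter (standard deviation 0.5 px), like any real detector
    noisy = pts + rng.normal(0, 0.5, size=pts.shape).astype(np.float32)

    H, _ = cv.findHomography(pts.reshape(-1, 1, 2),
                             noisy.reshape(-1, 1, 2), cv.RANSAC, 5.0)

    # Project the frame corners, which lie far outside the patch
    corners = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]]).reshape(-1, 1, 2)
    projected = cv.perspectiveTransform(corners, H)

    errors = np.linalg.norm(projected - corners, axis=2).ravel()
    print("corner errors in px:", np.round(errors, 1))
    # With a perfect homography the corners would not move at all; here they
    # are typically off by a few pixels, far above the 0.5 px input jitter,
    # and the farther a point lies from the patch, the stronger the effect.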

Upvotes: 1
