Karan

Reputation: 49

Slight Misalignment Issue on Stitching Images Together in Grid-Like Manner

I am trying to create high-resolution composite images of rugs, with all parts in focus. The best approach I have found for this is to take several shots of different parts of the rug and then stitch them together.

I was initially doing this with the Photoshop API, but very recently (within the past 30 days) Adobe bundled the Photoshop API into Firefly, which is currently enterprise-only. So now I am trying to build an OpenCV function for the same purpose.

My original images are below. They form a 3×2 grid: the left column, top to bottom, is Image 5, Image 3, Image 1; the right column, top to bottom, is Image 6, Image 4, Image 2.

[Image 5] [Image 6]
[Image 3] [Image 4]
[Image 1] [Image 2]

High-resolution versions of these images are available here

I tried doing this with OpenCV's Stitcher class and played around with several of its parameters, but it works with some sets of images and not with others. So for now I am using the manual approach below.
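For reference, the high-level API call I was experimenting with looks roughly like this (a minimal sketch; the mode and file paths shown here are illustrative, not my exact settings):

import cv2

# Minimal sketch of OpenCV's high-level Stitcher API (mode and paths are illustrative)
images = [cv2.imread(f"out/{i}.png") for i in range(1, 7)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # also tried cv2.Stitcher_PANORAMA
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("out/stitcher_result.png", pano)
else:
    print(f"Stitching failed with status code {status}")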

Current Approach

I am doing pairwise stitching. In the first pass I stitch the rows together:

1 gets stitched with 2

3 gets stitched with 4

5 gets stitched with 6

The results are decent:

[Row 1] [Row 2] [Row 3]

I am able to do blending and exposure compensation for seamless results, so those aspects can be ignored in these test images.

The next step is to stitch Row 1 with Row 2, and then stitch that result with Row 3 to get the final composite. The results are:

Final Results

[Row 1 & 2]
[Above stitched with Row 3]

I have not cropped the images, in order to keep them true to the raw output, so please excuse the additional white areas in the stitched results.

Problem

If you look at the final image, it has slight misalignments at the edges of the rug. This is what I am unable to remove.
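One way to narrow the problem down is to measure the reprojection error of the matched points against the estimated homography. Below is a small diagnostic sketch (not part of my modules); it assumes the points1, points2 arrays and the homography h from the code further down:

import numpy as np
import cv2

def reprojection_error(points1, points2, h):
    # points1/points2: matched keypoint coordinates (Nx2, float32); h: 3x3 homography
    # mapping image 2 into image 1's frame, as produced by compute_homography below.
    projected = cv2.perspectiveTransform(points2.reshape(-1, 1, 2), h).reshape(-1, 2)
    errors = np.linalg.norm(projected - points1, axis=1)
    return errors.mean(), errors.max()

If the mean error is already several pixels for a single pair, the misalignment is introduced at the matching/homography stage; if it is sub-pixel there, the error is more likely accumulating when the already-warped rows are stitched again in the second pass.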

My code is split across a few small modules:

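# entry-point script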
import cv2
from loader import load_images
from stitcher import stitch_images_manual

def main():
    corrected_images_dir = "out"
    num_images = 6  # Adjust this to the number of images you have
    rows = 3  # Number of rows in your grid
    columns = 2  # Number of columns in your grid

    # Load images
    images = load_images(corrected_images_dir, num_images)
    if len(images) == 0:
        print("No images loaded.")
        return

    # Stitch images in pairs for each row
    row_images = []
    for i in range(rows):
        start_idx = i * columns
        pair = [images[start_idx], images[start_idx + 1]]
        print(f'Stitching row {i + 1} with images {start_idx + 1} and {start_idx + 2}')
        stitched_row = stitch_images_manual(pair[0], pair[1])
        if stitched_row is not None:
            row_images.append(stitched_row)
            cv2.imwrite(f'out/stitched_row_{i + 1}.png', stitched_row)
            print(f'Saved stitched row {i + 1} as stitched_row_{i + 1}.png')
        else:
            print(f'Stitching failed for row {i + 1}')
            return

if __name__ == "__main__":
    main()


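# stitcher.py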
import cv2
import numpy as np
from features import detect_and_compute_features
from homography import compute_homography
from warp import warp_image
from resize import resize_image
from camera_params import estimate_initial_camera_params, refine_camera_params

def stitch_images_manual(image1, image2, scale_percent=50):
    print(f"Stitching images of shapes: {image1.shape} and {image2.shape}")

    # # Resize images to medium resolution
    # image1_resized = resize_image(image1, scale_percent)
    # image2_resized = resize_image(image2, scale_percent)

    # Convert images to grayscale
    gray1 = cv2.cvtColor(image1, cv2.COLOR_BGRA2GRAY)
    gray2 = cv2.cvtColor(image2, cv2.COLOR_BGRA2GRAY)

    # Detect SIFT features and compute descriptors
    keypoints1, descriptors1 = detect_and_compute_features(gray1)
    keypoints2, descriptors2 = detect_and_compute_features(gray2)

    # Match features using FLANN matcher
    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=10)
    search_params = dict(checks=500)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(descriptors1, descriptors2, k=2)

    # Apply Lowe's ratio test (guard against entries with fewer than 2 matches)
    good_matches = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good_matches.append(pair[0])

    if len(good_matches) < 4:
        print("Not enough good matches to compute homography.")
        return None

    # Extract location of good matches
    points1 = np.zeros((len(good_matches), 2), dtype=np.float32)
    points2 = np.zeros((len(good_matches), 2), dtype=np.float32)

    for i, match in enumerate(good_matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt

    # Compute homography
    h = compute_homography(points1, points2)

    if h is None:
        return None

    # Get dimensions of input images
    height1, width1 = image1.shape[:2]
    height2, width2 = image2.shape[:2]

    # Determine canvas size based on stitching direction
    if height1 > height2:
        canvas_width = max(width1, width2)
        canvas_height = height1 + height2
    else:
        canvas_width = width1 + width2
        canvas_height = max(height1, height2)

    # Warp the second image to the first image's plane
    warped_image2 = warp_image(image2, h, (canvas_width, canvas_height))

    # Create the stitched image canvas
    result = np.zeros((canvas_height, canvas_width, 4), dtype=np.uint8)
    result[0:height1, 0:width1] = image1

    # Create a mask of where the warped image has valid pixels
    mask = np.any(warped_image2 != 0, axis=2)

    # Paste the warped image pixels into the result image
    result[mask] = warped_image2[mask]

    return result

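# warp.py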
import cv2
import numpy as np

def warp_image(image, homography, canvas_size):
    warped_image = cv2.warpPerspective(image, homography, canvas_size)
    return warped_image


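# loader.py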
import os
import cv2

def load_images(image_dir, num_images):
    images = []
    for i in range(1, num_images + 1):
        image_path = os.path.join(image_dir, f'{i}.png')
        if os.path.exists(image_path):
            img = cv2.imread(image_path, cv2.IMREAD_UNCHANGED)
            if img is None:
                print(f'Failed to read {image_path}.')
                continue
            # Ensure every image has an alpha channel so shapes match downstream
            if img.shape[2] == 3:
                img = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
            images.append(img)
            print(f'Loaded image {i} with shape {img.shape}')
        else:
            print(f'Image {image_path} not found.')
    return images


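# homography.py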
import cv2
import numpy as np

def compute_homography(points1, points2):
    # Estimate the homography that maps points2 into points1's frame.
    # RANSAC's reprojection threshold defaults to 3 px; the inlier mask is unused here.
    h, mask = cv2.findHomography(points2, points1, cv2.RANSAC)
    if h is None:
        print("Homography computation failed.")
    return h

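# features.py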
import cv2

def detect_and_compute_features(image):
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    return keypoints, descriptors

Currently, I stitch the rows first, then rotate the results and feed them through again after changing the rows parameter. This is just laziness on my part and is easily resolved, so please ignore it as well.

Any ideas on what I can do to improve the results? I tried stitching the images after removing the background, but even that doesn't help.

Any tips or pointers that could guide me in the right direction would be much appreciated.

I have tried several stitching packages from GitHub, but they all give similarly misaligned results.

Upvotes: 1

Views: 182

Answers (1)

Lukas Weber

Reputation: 721

As a summary of the discussion in the comments: OpenCV's stitching module supports at least two ways of taking images:

panorama (sphere, no translation, perspectives)

flatbed scans (plane, translations, no perspectives)

Your use case doesn't fit either of the two. However, as Christoph Rackwitz pointed out:

if the carpet lies flat on the ground, the surface to be textured is a plane. it is not a sphere. homography is appropriate for composing pictures of the carpet into one big texture of it.

and indeed, OpenCV is capable of estimating a homography for your images.

OpenCV offers different warpers for homography-based stitching (see stitching_detailed.py or the stitching package). You might get better results with the plane warper.
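With the stitching package, selecting the plane warper should look roughly like this (a sketch I have not run on your images; warper_type and detector are the settings keys the package exposes, so double-check the exact names against the version you install):

import cv2
from stitching import Stitcher

# Sketch: use the plane warper instead of the default spherical one.
# "warper_type" and "detector" are settings keys of the stitching package;
# verify them against the installed version.
stitcher = Stitcher(detector="sift", warper_type="plane")
panorama = stitcher.stitch([f"out/{i}.png" for i in range(1, 7)])
cv2.imwrite("out/stitched_plane.png", panorama)

With OpenCV's stitching_detailed.py sample, the equivalent should be its --warp plane option (combined with --features sift if the default detector doesn't find enough matches on the rug texture).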

Upvotes: 2
