Adi shukla

Reputation: 303

Image Stitching warpPerspective size issue

I am trying to stitch two images. The tech stack is OpenCV C++ on VS 2017.

The images I considered are:

image1:

and

image2:

I found the homography matrix using this code, with image1 and image2 as given above.

    //-- Step 1: Detect the keypoints using the SURF detector
    int minHessian = 400;
    Ptr<SURF> detector = SURF::create(minHessian);
    vector< KeyPoint > keypoints_object, keypoints_scene;
    detector->detect(gray_image1, keypoints_object);
    detector->detect(gray_image2, keypoints_scene);

    
    Mat img_keypoints;
    drawKeypoints(gray_image1, keypoints_object, img_keypoints);
    imshow("SURF Keypoints", img_keypoints);

    Mat img_keypoints1;
    drawKeypoints(gray_image2, keypoints_scene, img_keypoints1);
    imshow("SURF Keypoints1", img_keypoints1);
    //-- Step 2: Calculate descriptors (feature vectors)
    Mat descriptors_object, descriptors_scene;
    detector->compute(gray_image1, keypoints_object, descriptors_object);
    detector->compute(gray_image2, keypoints_scene, descriptors_scene);

    //-- Step 3: Matching descriptor vectors using FLANN matcher

    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::FLANNBASED);
    vector< DMatch > matches;
    matcher->match(descriptors_object, descriptors_scene, matches);


    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints 
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist: %f \n", max_dist);
    printf("-- Min dist: %f \n", min_dist);


    //-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
    vector< DMatch > good_matches;
    Mat H;
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }
    Mat img_matches;
    drawMatches(gray_image1, keypoints_object, gray_image2, keypoints_scene, good_matches, img_matches, Scalar::all(-1),
        Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    imshow("Good Matches", img_matches);
    std::vector< Point2f > obj;
    std::vector< Point2f > scene;
    cout << "Good Matches detected" << good_matches.size() << endl;
    for (size_t i = 0; i < good_matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }


    // Find the Homography Matrix for img 1 and img2
    H = findHomography(obj, scene, RANSAC);

The next step is to warp the images. I used the perspectiveTransform function to find the corners of image1 on the stitched image, and took the x-coordinate of the projected bottom-right corner as the number of columns for the result Mat. This is the code I wrote:

    // Project the corners of image1 with the homography to see
    // where they land on the stitched canvas
    vector<Point2f> imageCorners(4);
    imageCorners[0] = Point2f(0, 0);
    imageCorners[1] = Point2f(image1.cols, 0);
    imageCorners[2] = Point2f(image1.cols, image1.rows);
    imageCorners[3] = Point2f(0, image1.rows);
    vector<Point2f> projectedCorners(4);
    perspectiveTransform(imageCorners, projectedCorners, H);
    Mat result;
    warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));
    // Copy image2 (the reference image) into the left part of the canvas
    Mat half(result, Rect(0, 0, image2.cols, image2.rows));
    image2.copyTo(half);
    imshow("result", result);

I am getting a stitched output of these images, but the issue is the size of the result. I compared the output of the code above with the two original images combined manually, and the result from the code is larger. What should I do to make it the right size? The ideal width should be image1.cols + image2.cols minus the width of the overlapping region.
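
To make the expected size concrete, here is a minimal sketch of what I mean, expressed with the projected corners from the code above (assuming, as in my code, that H maps image1 into image2's frame and image2 sits at the origin of the result):

    // Illustration only: the canvas needs to reach the right-most
    // projected corner of image1, while image2 occupies columns
    // [0, image2.cols) of the result
    float rightEdge = max(projectedCorners[1].x, projectedCorners[2].x);
    int idealWidth = max((int)ceil(rightEdge), image2.cols);
    // for a near-pure horizontal shift this equals
    // image1.cols + image2.cols - overlap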

Upvotes: 0

Views: 187

Answers (1)

Burak

Reputation: 2495

    warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));

This line seems problematic. You should choose the size from the extrema of the projected corners:

    Rect rec = boundingRect(projectedCorners);
    warpPerspective(image1, result, H, rec.size());

But you will lose parts of the warped image if rec.tl() falls into the negative axes, so you should shift the homography matrix so that the result falls into the first quadrant. See the Warping to perspective section of my answer to Fast and Robust Image Stitching Algorithm for many images in Python.
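
As a minimal sketch of that shift (assuming rec comes from the snippet above and that H maps image1 into image2's frame), compose a translation with the homography so the warp lands at non-negative coordinates:

    // Shift so the warped image falls into the first quadrant
    int tx = max(-rec.tl().x, 0);
    int ty = max(-rec.tl().y, 0);
    Mat T = (Mat_<double>(3, 3) << 1, 0, tx,
                                   0, 1, ty,
                                   0, 0, 1);
    // The canvas must fit both the shifted warp and image2,
    // which now sits at (tx, ty) instead of the origin
    Size canvas(max(rec.br().x, image2.cols) + tx,
                max(rec.br().y, image2.rows) + ty);
    warpPerspective(image1, result, T * H, canvas);
    image2.copyTo(result(Rect(tx, ty, image2.cols, image2.rows)));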

Upvotes: 0
