user2417553

Reputation: 23

Stitching final size and offset

I am stitching images with OpenCV and Python. Everything works well, except for one thing: I can't manage to compute the exact final size of the resulting picture. My image is always too big and has black borders. Moreover, the offset doesn't seem to be correct, because there is a black line where the pictures merge.

Here is my function :

    def calculate_size(size_image1, size_image2, homography):

      ## Calculate the size and offset of the stitched panorama.

      offset = abs((homography*(size_image2[0]-1,size_image2[1]-1,1))[0:2,2]) 
      print offset
      size   = (size_image1[1] + int(offset[0]), size_image1[0] + int(offset[1]))
      if (homography*(0,0,1))[0][1] > 0:
        offset[0] = 0
      if (homography*(0,0,1))[1][2] > 0:
        offset[1] = 0

      ## Update the homography to shift by the offset
      homography[0:2,2] +=  offset

      return (size, offset)


    ## 4. Combine images into a panorama. [4] --------------------------------
    def merge_images(image1, image2, homography, size, offset, keypoints):

      ## Combine the two images into one.
      panorama = cv2.warpPerspective(image2, homography, size)
      (h1, w1) = image1.shape[:2]

      for h in range(h1):
        for w in range(w1):
            if image1[h][w][0] != 0 or image1[h][w][3] != 0 or image1[h][w][4] != 0:
                panorama[h + offset[1]][w + offset[0]] = image1[h][w]

      ## TODO: Draw the common feature keypoints.

      return panorama

And my results:

1st image: First image

2nd image: Second Image

Stitched image: Stitched result

What am I doing wrong?

Upvotes: 2

Views: 3176

Answers (3)

Floris

Reputation: 21

Well, I don't know a lot about Python, but basically I had the same problem. To solve the size issue, I did the following:

    perspectiveTransform(obj_original_corners, scene_corners, homography);

After that, I just searched both images for the smallest_X, smallest_Y, biggest_X and biggest_Y.

I then used these numbers in:

    cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(biggestX - smallestX, biggestY - smallestY));

So the new image itself will have the proper size, even if the 2nd image ends up at a negative x or negative y.

The only thing I'm still struggling with at the moment is how to apply that shift to warpPerspective, because part of my image now gets cut off due to the negative numbers.

Upvotes: 1

Stawman

Reputation: 31

    if (homography*(0,0,1))[0][1] > 0:
        offset[0] = 0
    if (homography*(0,0,1))[1][2] > 0:
        offset[1] = 0

This code is wrong. The correct version is the following:

    if (homography*(0,0,1))[0][2] > 0:
        offset[0] = 0
    if (homography*(0,0,1))[1][2] > 0:
        offset[1] = 0
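Indexing the product matrix like this is easy to get wrong. With a plain 3x3 NumPy array you can instead transform the corner explicitly and read off its x and y, for example (a sketch; the function name is mine):

```python
import numpy as np

def transform_corner(homography, x, y):
    """Apply a 3x3 homography to the point (x, y) and return the
    Cartesian (x', y') after dividing by the homogeneous coordinate."""
    p = np.asarray(homography, dtype=float).dot([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

The offset test then reads `if transform_corner(homography, 0, 0)[0] > 0: offset[0] = 0`, with no ambiguity about row and column indices.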

Upvotes: 2

Stawman

Reputation: 31

According to the stitching pipeline, all your steps are right. The result comes from your source pictures.

    for h in range(h1):
      for w in range(w1):
          if image1[h][w][0] != 0 or image1[h][w][3] != 0 or image1[h][w][4] != 0:
              panorama[h+offset[1]][w + offset[0]] = image1[h][w]

This operation only filters out pixels whose color is exactly zero. In fact, some pixels look black but are not pure black, only very close to it, so those near-black pixels are not filtered out by your program.
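A tolerance instead of an exact-zero test handles those near-black pixels. (Note also that a BGR image has only channels 0, 1 and 2, so `image1[h][w][3]` and `[4]` would be out of range.) A vectorized sketch, where the threshold value is an assumption to tune:

```python
import numpy as np

def paste_nonblack(panorama, image1, offset, thresh=10):
    """Copy image1 into panorama at the given (x, y) offset, skipping
    pixels that are near-black in every channel."""
    h1, w1 = image1.shape[:2]
    ox, oy = offset
    # True wherever any channel exceeds the darkness threshold.
    mask = (image1 > thresh).any(axis=2)
    region = panorama[oy:oy + h1, ox:ox + w1]
    region[mask] = image1[mask]
    return panorama
```

This replaces the per-pixel Python loop with a boolean mask, which is also much faster on large images.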

Upvotes: 0
