Robot Jung

Reputation: 397

How can I apply the contours of a downsized image to the original image?

I have working code for finding contours with OpenCV, but it processes a downsized copy of the image to improve computational speed. How can I apply the contours found in the downsized image to the original image?

This is my Python code:

import cv2 as cv
import imutils

# Image Read and Resizing
source_image = cv.imread(image_path)
copied_image = source_image.copy()
copied_image = imutils.resize(copied_image, height=500)

# Apply GaussianBlur + OTSU-Thresholding
grayscale_image = cv.cvtColor(copied_image, cv.COLOR_BGR2GRAY)
grayscale_image = cv.GaussianBlur(grayscale_image, (5, 5), 0)
ret, grayscale_image = cv.threshold(grayscale_image, 200, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)

# Find Contours
contours, hierarchy = cv.findContours(grayscale_image, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
contour_sizes = [(cv.contourArea(contour), contour) for contour in contours]
biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]

# Crop Image
x, y, w, h = cv.boundingRect(biggest_contour)
cropped_image = copied_image[y:y + h, x:x + w]

copied_image is smaller than source_image, and I only use the largest contour. Now I want to apply the found contour to source_image, but in my code the contour coordinates are based on copied_image.

Upvotes: 1

Views: 564

Answers (1)

HansHirse

Reputation: 18925

If you can live with an (in)accuracy of 1 or 2 pixels, a quite simple solution would be to just multiply the x, y, w, h values of your bounding rectangle by the corresponding scaling factors:

import cv2
import numpy as np

# Set up some test image
image = np.zeros((400, 400), np.uint8)
image = cv2.circle(image, (160, 160), 80, 255, cv2.FILLED)

# Find contour, and determine original bounding rectangle
cnt_orig = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
x, y, w, h = cv2.boundingRect(cnt_orig[0])
print('Original bounding rectangle: ', x, y, w, h)

# Downsize image
image_small = cv2.resize(image.copy(), (124, 287))

# Determine scaling factors
scale_x = image.shape[1] / image_small.shape[1]
scale_y = image.shape[0] / image_small.shape[0]

# Find contour, and determine reconstructed bounding rectangle w.r.t. the scaling factors
cnt_small = cv2.findContours(image_small, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
x, y, w, h = cv2.boundingRect(cnt_small[0]) * np.array([scale_x, scale_y, scale_x, scale_y])
print('Reconstructed bounding rectangle: ', x, y, w, h)

Output:

Original bounding rectangle:  80 80 161 161
Reconstructed bounding rectangle:  80.64... 79.44... 161.29... 161.67...

Notice: The test image used here is very simple. The inaccuracy might increase when finding more complex contours in more complex images.
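
Applied to the code from the question, the rescaling step might look roughly like the following. This is a minimal sketch, not a drop-in replacement; it assumes source_image, copied_image, and biggest_contour already exist as set up in the question's code, and it rounds the scaled values to whole pixels before cropping the full-resolution image:

import cv2 as cv

# Scaling factors between the original image and the downsized copy
# (source_image and copied_image are assumed from the question's code)
scale_x = source_image.shape[1] / copied_image.shape[1]
scale_y = source_image.shape[0] / copied_image.shape[0]

# Bounding rectangle of the largest contour, in downsized-image coordinates
x, y, w, h = cv.boundingRect(biggest_contour)

# Rescale to original-image coordinates and round to integer pixels
x_o = int(round(x * scale_x))
y_o = int(round(y * scale_y))
w_o = int(round(w * scale_x))
h_o = int(round(h * scale_y))

# Crop from the original (full-resolution) image
cropped_original = source_image[y_o:y_o + h_o, x_o:x_o + w_o]

If the contour itself (rather than just its bounding rectangle) is needed in original-image coordinates, the same idea applies: multiply every contour point by the scaling factors and convert the result back to an integer array.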

----------------------------------------
System information
----------------------------------------
Platform:    Windows-10-10.0.16299-SP0
Python:      3.8.5
NumPy:       1.19.4
OpenCV:      4.4.0
----------------------------------------

Upvotes: 1
