SMD

Reputation: 131

Digital image processing of corn kernels

I am trying to identify and count insect-infested corn kernels from good or healthy corn kernels. I have done the thresholding up until drawing contours around all the corn kernels in the image.

Insect-infested (with holes and fading yellow color) and good corn kernels

FYI, the insect-infested kernels have holes and fading yellow color. How should I get the percentage of infested kernels from an image with the infested and good kernels? I am also open to other suggestions.

Upvotes: 13

Views: 903

Answers (3)

Tomer Geva

Reputation: 1834

I will offer a solution which implements one of the most fundamental ideas of image processing: feature representation of objects. In the following example I will show how we can:

  1. Remove the background of the corn kernels
  2. Extract the centroid location of each corn kernel using Green's theorem
  3. Convert each corn kernel from an RGB region of interest to a histogram
  4. Allocate similar labels to similar kernels using the histogram representation of each kernel and the k-means algorithm.

I will walk through the stages of the algorithm along with the results of each stage; the full code is attached at the end.

Project infrastructure

Our little project will be conveniently organized under the CornClassifier class. The first stage will be to import the needed libraries and set up the __init__() method.

Each of the parameters defined in the __init__() method will be used during the implementation.

To get things started, we will first read the image and save it locally under the CornClassifier parameters for convenience, both in color and in grayscale. Therefore we will write the load_image function, which will make our class infrastructure look as follows:

class CornClassifier:
    def __init__(self, image):
        self.path  = image
        # Image
        self.image           = None
        self.image_grayscale = None
        # Masking parameters
        self.ret  = None
        self.mask = None
        self.masked_image     = None
        self.masked_image_lab = None
        # Corn centroid parameters
        self.centroid_tuples = []
        self.centroid_x      = []
        self.centroid_y      = []
        self.contours        = []  # Saving the contours for the histogram computations
        # Corn histograms
        self.corn_histograms = []

    def load_image(self, show=False):
        """
        :param show: Plotting the image to screen
        :return: loading the image from the path to the attribute `image`
        """
        self.image           = cv2.imread(self.path, cv2.IMREAD_COLOR)
        self.image_grayscale = cv2.imread(self.path, cv2.IMREAD_GRAYSCALE)
        if show:
            plt.imshow(self.image[:,:,[2,1,0]])  # cv2.imread flips the channel order
            plt.show()

Background removal

In this section we will remove the background, thus allowing better separation between "good" and "bad" corn kernels. This will be done by exploiting the fact that the background is black whereas the corn kernels are not. The main steps of this section are:

  1. Perform a Gaussian blur on the grayscale image. This blurs the corn kernels a little while darkening the white shimmer of the black surface, which helps separate the background from the corn kernels.

  2. Perform thresholding on the blurred image using Otsu's method, which is the preferable choice when we have a black background and a white foreground, as is the case for the grayscale image (you can read more on this here).

  3. Assuming that the corn kernels are clearly separated, we will find all the different contours in the binary output of stage (2). For each contour we will fill the interior of the shape to allow better masking of each corn kernel.

  4. After creating the mask, we will apply it and convert the image from RGB to a color-space which allows better separation of the "good" kernels from the "bad" ones. After playing around with several color-spaces, the best one I found is the LAB space, which consists of:

    • Lightness (intensity)
    • A - color component ranging from Green to Magenta
    • B - color component ranging from Blue to Yellow

You can read more on the available color-spaces here. A quick way to inspect the three LAB channels is sketched right below.
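This is just a minimal side sketch (not part of the CornClassifier class), assuming the same ./corn.jpg image used throughout this answer:

import cv2

# Minimal sketch: inspect the three LAB channels of the corn image.
image_bgr = cv2.imread('./corn.jpg', cv2.IMREAD_COLOR)
lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
L, A, B = cv2.split(lab)  # L: lightness, A: green-to-magenta, B: blue-to-yellow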

This will be implemented in the remove_background function (see code below). The result of this background removal is the following mask: [mask image]

And the resulting masked image (in RGB) will be: [masked image]

Let us note that there are still small artifacts which will be dealt with in the following functions.

Isolating the corn from the remaining artifacts

In this section we will remove any residual artifacts. We rely on the observation that the masked image after background removal is almost perfect, and any remaining artifacts are small and can be represented by polygons with a small number of faces (or corners). Therefore, we will create the isolating_corn function to do just that. The function iterates over all the contours in the mask and discards contours whose representing polygon has fewer than 20 corners. The polygons that pass the test are saved in the CornClassifier contours parameter, and the centroid of each corn kernel is computed using the moments of its contour (via Green's theorem; the theory behind this is a bit involved but understandable, you can read more here). The centroid formulas are given right below.
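For reference, the centroid that cv2.moments gives for each contour is simply the center of mass of the filled shape, computed from the image moments (this is the general definition, not something specific to this answer):

M_{pq} = \sum_x \sum_y x^p \, y^q \, I(x, y)
c_x = M_{10} / M_{00}, \qquad c_y = M_{01} / M_{00}

Here M_{00} is the area of the filled contour, which is also why a degenerate (zero-area) contour leads to the ZeroDivisionError handled in the code below.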

After applying this function we can see that all the artifacts have been discarded, as seen in the figure below. If any artifacts remained, we would see a centroid where there is no corn kernel. [centroids plotted over the masked image]

Corn kernel representation

In this section the most important part of the project happens: we will represent each corn kernel as an equivalent pixel histogram. Since the LAB color-space has 3 color channels, a naïve approach would represent each corn kernel as a (255*3)x1 = 765x1 vector (not including the black-equivalent component of each channel, so that the background is ignored). An example of a few of the histograms is given below. We can see that the green and blue histograms are somewhat similar, while the red histogram differs from the other two. [full-resolution histograms of a few kernels]

Nevertheless, we can do better. The corn kernels are not pure Lambertian surfaces (you can read more about Lambertian reflection here). This means that the rotation and shape of each kernel change the lighting it receives, resulting in a slightly different reflection and a slightly different color. Therefore, we will group close colors together and reduce the total number of bins in each channel from 256 to 16, resulting in a (15*3)x1 = 45x1 histogram vector. The same corn kernel will now be represented by the following histograms: [coarse-binned histograms of the same kernel] Each histogram is saved for future use in the clustering algorithm. The implementation of this is in the compute_histograms function (see code below). We can see that the histogram representation of the corn kernels could be further improved, since some bins have zero value across all corn kernels, but for now we can leave this be. A short sketch illustrating the binning intuition is given right below.
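To make the binning intuition concrete, here is a tiny sketch (not part of the CornClassifier class): with 16 equal bins over the 0-255 range, the bin width is 256 / 16 = 16, so channel values that differ only slightly (for example due to lighting) usually land in the same bin.

# Minimal sketch: mapping hypothetical channel values to their bin index with 16 bins of width 16
values = [100, 105, 112, 240]
bins = [v // 16 for v in values]  # integer division gives the bin index
print(bins)                       # [6, 6, 7, 15] -> 100 and 105 share a bin, 112 is a neighbour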

Clustering

Up until this point we were setting the stage for the main event! Now that we have our representation of the corn kernels, we can group them into distinct clusters. Since we know the number of groups we want (2, for "good" and "bad"), we can use the K-means algorithm with K=2. There are numerous explanations of this algorithm, so I will not add a reference here. The implementation is as follows: we fit the k-means model using our corn_histograms parameter with n_clusters=2, then extract the matching label for each histogram and scatter the centroids of each cluster in a different color over the original picture. This is implemented in the classify_corn function (see code below). The result is seen below: [clustered centroids over the original image]

We can see that the corn kernels have been divided into two clusters, where one cluster (red centroids) marks the good corn kernels and the other cluster (blue centroids) marks the infested kernels. After the clustering we have a labels vector allocating each kernel to one of the two clusters discovered by the K-means algorithm. Computing the percentage of each of the two groups can be done as follows:

print(f'Total corn kernels detected: {len(labels)}')
print(f'Number of "Blue" group kernels: {np.sum(labels == 1)} ; Percentage: {np.around(100 * np.sum(labels == 1) / len(labels), 2)} %')
print(f'Number of  "Red" group kernels: {np.sum(labels == 0)} ; Percentage: {100 - np.around(100 * np.sum(labels == 1) / len(labels), 2)} % ')

Resulting in:

Total corn kernels detected: 70
Number of "Blue" group kernels: 27 ; Percentage: 38.57 %
Number of  "Red" group kernels: 43 ; Percentage: 61.43 % 

Summary

This project sums up two very important aspects of computer vision:

  1. Feature representation of objects, which in this case was the histogram representation of the corn kernels
  2. Clustering objects via their feature representation using the k-means algorithm

For convenience's sake, the full CornClassifier class is written below, along with the calls to its functions:

import cv2
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

class CornClassifier:
    def __init__(self, image):
        self.path  = image
        # Image
        self.image           = None
        self.image_grayscale = None
        # Masking parameters
        self.ret  = None
        self.mask = None
        self.masked_image     = None
        self.masked_image_lab = None
        # Corn centroid parameters
        self.centroid_tuples = []
        self.centroid_x      = []
        self.centroid_y      = []
        self.contours        = []  # Saving the contours for the histogram computations
        # Corn histograms
        self.corn_histograms = []

    def load_image(self, show=False):
        """
        :param show: Plotting the image to screen
        :return: loading the image from the path to the attribute `image`
        """
        self.image           = cv2.imread(self.path, cv2.IMREAD_COLOR)
        self.image_grayscale = cv2.imread(self.path, cv2.IMREAD_GRAYSCALE)
        if show:
            plt.imshow(self.image[:,:,[2,1,0]])  # cv2.imread flips the channel order
            plt.show()

    def remove_background(self, show=False):
        """
        :param show: Plotting the mask to screen
        :return:
        1. Performing gaussian filtering to blur the noise of the black background
        2. Performing Otsu's thresholding - practical example is given in:
         https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_thresholding/py_thresholding.html#otsus-binarization
        3. Fill contour to better mask the image
        4. Mask the image and change the color-space
        """
        image         = self.image_grayscale.copy()
        # Step 1
        blurred_image = cv2.GaussianBlur(image, (5,5), 0)
        # Step 2
        self.ret, self.mask = cv2.threshold(blurred_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Step 3 - Filling holes in the corn kernel
        contours, hierarchies = cv2.findContours(self.mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            cv2.fillPoly(self.mask, pts=[c], color=(255, 255, 255))
        # Step 4
        self.masked_image     = cv2.bitwise_and(self.image, self.image, mask=self.mask)
        self.masked_image_lab = cv2.cvtColor(self.masked_image, cv2.COLOR_BGR2LAB)
        if show:
            plt.figure()
            plt.imshow(self.mask, cmap='gray')  # the mask is single-channel, shown in grayscale
            plt.figure()
            plt.imshow(self.masked_image[:,:,[2,1,0]])
            plt.show()

    def isolating_corn(self, show=False):
        """
        :param show:
        :return: Extracting the coordinates of each corn object, assuming all corn kernels
        are separated from each other. We compute the centroids by computing the moments of each
        corn kernel (Green's theorem)
        https://learnopencv.com/find-center-of-blob-centroid-using-opencv-cpp-python/
        """
        mask = self.mask.copy()
        # Finding the different contours
        contours, hierarchies  = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            # removing small contours
            if c.shape[0] < 20:
                continue
            # calculate moments for each contour
            M = cv2.moments(c)
            # calculate x,y coordinate of center
            try:
                cX = int(M["m10"] / M["m00"])
                cY = int(M["m01"] / M["m00"])
                self.centroid_tuples.append((cX, cY))
                self.centroid_x.append(cX)
                self.centroid_y.append(cY)
                self.contours.append(c)
            except ZeroDivisionError:
                pass
        if show:
            plt.figure()
            plt.imshow(self.masked_image[:,:,[2,1,0]])
            plt.scatter(self.centroid_x, self.centroid_y)
            plt.show()

    def compute_histograms(self, show=False):
        """
        :param show:
        :return: Computing the histogram for each corn kernel
        """
        for c in self.contours:
            # Creating an image with just that filled contour
            temp_mask = np.zeros_like(self.image)
            cv2.fillPoly(temp_mask, pts=[c], color=(255, 255, 255))
            single_corn = cv2.bitwise_and(self.masked_image_lab, temp_mask)
            # Generating histograms, avoiding the 0 values
            hist0 = cv2.calcHist([single_corn],[0],None,[16],[0,256])
            hist1 = cv2.calcHist([single_corn],[1],None,[16],[0,256])
            hist2 = cv2.calcHist([single_corn],[2],None,[16],[0,256])
            total_hist = np.squeeze(np.vstack((hist0[1:], hist1[1:], hist2[1:])))
            self.corn_histograms.append(total_hist / sum(total_hist))
        if show:
            plt.figure()
            plt.stem(self.corn_histograms[10], markerfmt='b', basefmt='b')
            plt.stem(self.corn_histograms[1], markerfmt='r', basefmt='r')
            plt.stem(self.corn_histograms[-1], markerfmt='g', basefmt='g')
            plt.show()

    def classify_corn(self):
        kmeans = KMeans(n_clusters=2, init='k-means++', random_state=0).fit(self.corn_histograms)
        labels = kmeans.labels_
        print(f'Total corn kernels detected: {len(labels)}')
        print(f'Number of "Blue" group kernels: {np.sum(labels == 1)} ; Percentage: {np.around(100 * np.sum(labels == 1) / len(labels), 2)} %')
        print(f'Number of  "Red" group kernels: {np.sum(labels == 0)} ; Percentage: {100 - np.around(100 * np.sum(labels == 1) / len(labels), 2)} % ')
        plt.imshow(self.image[:,:,[2,1,0]])
        plt.scatter(np.array(self.centroid_x)[labels.astype(bool)], np.array(self.centroid_y)[labels.astype(bool)], c='b')
        plt.scatter(np.array(self.centroid_x)[~labels.astype(bool)], np.array(self.centroid_y)[~labels.astype(bool)], c='r')
        plt.show()

if __name__ == '__main__':
    corn = CornClassifier('./corn.jpg')
    corn.load_image(False)
    corn.remove_background(False)
    corn.isolating_corn(False)
    corn.compute_histograms(True)
    corn.classify_corn()

Upvotes: 12

Red

Reputation: 27577

The Concept

First, detect the total number of kernels in the image. Given the dark background, a simple binary threshold and an area filter (to filter out noise) will suffice as preprocessing before passing the image into the cv2.findContours() method and taking the length of the result.

Next, use a color range to mask out the infested kernels. The program below uses a LAB color mask, with the lower range being np.array([0, 0, 150]) and the upper range being np.array([255, 255, 255]); since the third (B) channel of LAB runs from blue to yellow, this keeps only the strongly yellow pixels. The mask will not exclude the infested kernels completely, but the area filter will allow us to filter them out based on their decreased area in the mask.

From Wikipedia's page on CIELAB color space:

The CIELAB color space, also referred to as L*a*b*, is a color space defined by the International Commission on Illumination (abbreviated CIE) in 1976.

Finally, we'll be able to draw the contours of the good kernels onto the image, and calculate the percentage of infested kernels out of the total kernels.

The Code:

import cv2
import numpy as np

def large(cnt):
    return cv2.contourArea(cnt) > 5000

def get_contours(img):
    return cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]

def get_mask(img):
    img_lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    lower = np.array([0, 0, 150])
    upper = np.array([255, 255, 255])
    return cv2.inRange(img_lab, lower, upper)

img = cv2.imread("corn.jpg")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(img_gray, 100, 255, cv2.THRESH_BINARY)
total = len(list(filter(large, get_contours(thresh))))

mask = get_mask(img)
contours = list(filter(large, map(cv2.convexHull, get_contours(mask))))
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)

infested = total - len(contours)
print(f"Total Kernels: {total}")
print(f"Infested Kernels: {infested}")
print(f"Infested Percentage: {round(infested / total * 100)}%")

cv2.imshow("Result", cv2.resize(img, (700, 700)))
cv2.waitKey(0)

The Output:

Total Kernels: 70
Infested Kernels: 27
Infested Percentage: 39%

[result image with the good kernels outlined]

The Explanation

  1. Import the necessary libraries:
import cv2
import numpy as np
  2. Define a function, large(), that will take in a contour and return True if the area of the contour is greater than 5000 (adjust this value accordingly when working with images of different sizes):
def large(cnt):
    return cv2.contourArea(cnt) > 5000
  3. Define a function, get_contours(), that will take in a binary image and return the contours of the image:
def get_contours(img):
    return cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
  4. Define a function, get_mask(), that will take in an image, convert it to LAB color space, and return the mask for the image with the lower range 0, 0, 150 and the upper range 255, 255, 255:
def get_mask(img):
    img_lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    lower = np.array([0, 0, 150])
    upper = np.array([255, 255, 255])
    return cv2.inRange(img_lab, lower, upper)
  5. Read in the image file. To find the total number of kernels in the image, convert the image to grayscale, threshold it so that the background is masked out, and find the contours using the get_contours() function we defined. Also, filter out any noise with the built-in filter() function, using the large() function we defined as the first argument. That way, we can use the built-in len() function to get the total number of kernels in the image:
img = cv2.imread("corn.jpg")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(img_gray, 100, 255, cv2.THRESH_BINARY)
total = len(list(filter(large, get_contours(thresh))))

Resulting thresh for the image corn.jpg:

[thresh image]

  6. Get the mask of the image using the get_mask() function we defined, get the contours of the mask, and also filter out any noise using the large() function. With the filtered contours, call the cv2.drawContours() method to highlight the good kernels (purely for visualization):
mask = get_mask(img)
contours = list(filter(large, map(cv2.convexHull, get_contours(mask))))
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)

Resulting mask for the image corn.jpg:

[mask image]

I ran the program again with some edits so that the contours would be drawn on the mask, for a better understanding of the filtering process:

[contours drawn on the mask]

  7. Finally, we can calculate the percentage of infested kernels to the total amount of kernels in the image, and print the results:
infested = total - len(contours)
print(f"Total Kernels: {total}")
print(f"Infested Kernels: {infested}")
print(f"Infested Percentage: {round(infested / total * 100)}%")

Tools

If you happen to have other images that you would like to apply the same algorithm to, but the shades of the other images are rather different, you can use OpenCV Trackbars to adjust the lower and upper bounds of the color mask (as well as any other value that might need tweaking). Here is a program that allows you to change the LAB ranges through trackbars, and shows the resulting images in real-time:

import cv2
import numpy as np

def large(cnt):
    return cv2.contourArea(cnt) > 5000

def get_contours(img):
    return cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]

def get_mask(img, l_min, l_max, a_min, a_max, b_min, b_max):
    img_lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    lower = np.array([l_min, a_min, b_min])
    upper = np.array([l_max, a_max, b_max])
    return cv2.inRange(img_lab, lower, upper)

def show(imgs, win="Image", scale=1):
    imgs = [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) if len(img.shape) == 2 else img for img in imgs]
    img_concat = np.concatenate(imgs, 1)
    h, w = img_concat.shape[:2]
    cv2.imshow(win, cv2.resize(img_concat, (int(w * scale), int(h * scale))))

def put_text(img, text, y):
    cv2.putText(img, text, (20, y), cv2.FONT_HERSHEY_COMPLEX, 2, (255, 128, 0), 4)
    
d = {"L min": (0, 255),
     "L max": (255, 255),
     "A min": (0, 255),
     "A max": (255, 255),
     "B min": (150, 255),
     "B max": (255, 255)}

cv2.namedWindow("Track Bars")
for i in d:
    cv2.createTrackbar(i, "Track Bars", *d[i], id)  # the builtin id() doubles as a do-nothing callback

img = cv2.imread("corn.jpg")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(img_gray, 100, 255, cv2.THRESH_BINARY)
total = len(list(filter(large, get_contours(thresh))))
while True:
    img_copy = img.copy()
    mask = get_mask(img, *(cv2.getTrackbarPos(i, "Track Bars") for i in d))
    contours = list(filter(large, map(cv2.convexHull, get_contours(mask))))
    cv2.drawContours(img_copy, contours, -1, (0, 255, 0), 3)
    infested = total - len(contours)
    put_text(img_copy, f"Total Kernels: {total}", 50)
    put_text(img_copy, f"Infested Kernels: {infested}", 120)
    put_text(img_copy, f"Infested Percentage: {round(infested / total * 100)}%", 190)
    show([img_copy, mask], "Results", 0.3)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

Demonstration of the program (speed x2):

[demonstration GIF]

Upvotes: 5

nathancy

Reputation: 46650

Here's an approach using HSV color thresholding to differentiate between the good and infected kernels. Since we know that a good kernel is yellow and an infected kernel is gray, we can segment the desired objects using a lower/upper color range.

  1. Count the total number of kernels and calculate the average area. We load the image, convert to grayscale, apply Otsu's threshold to obtain a binary image, and then perform morphological operations to remove noise. Next we find contours, keep track of the total number_of_kernels, and calculate the average_area over the kernels.

  2. HSV color thresholding to isolate good/infected kernels. Next we convert the image to HSV then perform HSV color thresholding to isolate the good kernels. We perform additional morphological operations to remove noise.

  3. Calculate the number of good kernels and the percentage infected. The idea is that we can use some arbitrary area threshold ratio, say 0.75 or 75%: if a kernel's area is less than this threshold area, we conclude that it is infected. In other words, any individual kernel needs to have an area of at least average_area * area_threshold to be identified as a good kernel. We iterate through contours and keep a counter of the kernels that pass our filter. From here we calculate the number of infected kernels and its percentage.


Here's a visualization of the pipeline

Binary image -> morphological operations

Number of kernels: 70
Average kernel area: 6854.864

Next we apply HSV color thresholding to isolate the good kernels using this HSV color range:

lower = np.array([0, 70, 97])
upper = np.array([179, 255, 255])

Good kernels -> Detected kernels

Finally we can calculate the number of infected kernels and its percentage

Number of good kernels: 36
Number of infected: 34
Percentage infected: 48.571%

You can adjust the HSV lower/upper ranges and the area_threshold ratio to fine-tune your result.

Code

import cv2
import numpy as np

# Load image, convert to grayscale, Otsu's threshold, morph operations to remove noise
image = cv2.imread('1.jpg')
original = image.copy()
result_mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
erode = cv2.erode(thresh, kernel, iterations=2)
morph = cv2.morphologyEx(erode, cv2.MORPH_CLOSE, kernel, iterations=3)

# Count number of kernels and average kernel area
number_of_kernels = 0
average_area = 0
cnts = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    # Filter out tiny specs of noise
    area = cv2.contourArea(c)
    if area > 10:
        number_of_kernels += 1
        average_area += area

average_area /= number_of_kernels
print('Number of kernels:', number_of_kernels)
print('Average kernel area: {:.3f}'.format(average_area))

# Perform HSV color thresholding
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 70, 97])
upper = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
cleanup = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=3)

# Find number of good kernels using an area threshold ratio relative to average kernel area
cnts = cv2.findContours(cleanup, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
area_threshold = 0.75
good_kernels = 0
for c in cnts:
    area = cv2.contourArea(c)
    if area > area_threshold * average_area:
        cv2.drawContours(image, [c], -1, (36,255,12), 4)
        cv2.drawContours(result_mask, [c], -1, (255,255,255), -1)
        good_kernels += 1

# Calculate number of infected kernels
result_mask = cv2.cvtColor(result_mask, cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(original, original, mask=result_mask)
number_of_infected = number_of_kernels - good_kernels

print('Number of good kernels:', good_kernels)
print('Number of infected:', number_of_infected)
print('Percentage infected: {:.3f}%'.format((number_of_infected/number_of_kernels) * 100)) 

cv2.imshow('image', image)
cv2.imshow('thresh', thresh)
cv2.imshow('morph', morph)
cv2.imshow('result', result)
cv2.waitKey()

Here's a simple HSV color thresholder script to determine the lower/upper color ranges using trackbars. Just change the image path.

import cv2
import numpy as np

def nothing(x):
    pass

# Load image
image = cv2.imread('1.jpg')

# Create a window
cv2.namedWindow('image')

# Create trackbars for color change
# Hue is from 0-179 for Opencv
cv2.createTrackbar('HMin', 'image', 0, 179, nothing)
cv2.createTrackbar('SMin', 'image', 0, 255, nothing)
cv2.createTrackbar('VMin', 'image', 0, 255, nothing)
cv2.createTrackbar('HMax', 'image', 0, 179, nothing)
cv2.createTrackbar('SMax', 'image', 0, 255, nothing)
cv2.createTrackbar('VMax', 'image', 0, 255, nothing)

# Set default value for Max HSV trackbars
cv2.setTrackbarPos('HMax', 'image', 179)
cv2.setTrackbarPos('SMax', 'image', 255)
cv2.setTrackbarPos('VMax', 'image', 255)

# Initialize HSV min/max values
hMin = sMin = vMin = hMax = sMax = vMax = 0
phMin = psMin = pvMin = phMax = psMax = pvMax = 0

while(1):
    # Get current positions of all trackbars
    hMin = cv2.getTrackbarPos('HMin', 'image')
    sMin = cv2.getTrackbarPos('SMin', 'image')
    vMin = cv2.getTrackbarPos('VMin', 'image')
    hMax = cv2.getTrackbarPos('HMax', 'image')
    sMax = cv2.getTrackbarPos('SMax', 'image')
    vMax = cv2.getTrackbarPos('VMax', 'image')

    # Set minimum and maximum HSV values to display
    lower = np.array([hMin, sMin, vMin])
    upper = np.array([hMax, sMax, vMax])

    # Convert to HSV format and color threshold
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    result = cv2.bitwise_and(image, image, mask=mask)

    # Print if there is a change in HSV value
    if((phMin != hMin) | (psMin != sMin) | (pvMin != vMin) | (phMax != hMax) | (psMax != sMax) | (pvMax != vMax) ):
        print("(hMin = %d , sMin = %d, vMin = %d), (hMax = %d , sMax = %d, vMax = %d)" % (hMin , sMin , vMin, hMax, sMax , vMax))
        phMin = hMin
        psMin = sMin
        pvMin = vMin
        phMax = hMax
        psMax = sMax
        pvMax = vMax

    # Display result image
    cv2.imshow('image', result)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()

Upvotes: 8
