Reputation: 1
I'm working on a project that should do the following: "Make a Python program that can find diagonal edges in an image. Input: Greyscale image Output: Binary image where the diagonal edges are white (255) and the rest of the pixels black (0)"
This is the input image I'm using, and this is my current output (images omitted).
It's not exactly what I'm looking for, and I think I found the problem: my np.array (SobelKernel) does not use the negative values when multiplied with the pixel value (checked with a print). Any idea how to fix that?
Here's my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('LENNA.JPG')
height = img.shape[1]
width = img.shape[0]
out = np.zeros(img.shape, np.uint8)
SobelKernel = np.array([[2, 1, 0],
                        [1, 0, -1],
                        [0, -1, -2]], np.int8)
for y in range(1, height-1):
    for x in range(1, width-1):
        temp = 0
        for j in range(2):
            for k in range(2):
                pixValue = img[x + j - 1][y + k - 1]
                kernelValue = SobelKernel[j][k]
                temp = temp + pixValue*kernelValue
                #print(SobelKernel[j][k])
        out[x, y] = temp
cv2.imshow('test', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
Upvotes: 0
Views: 983
Reputation: 22023
The issue is that your result can go outside the boundaries of uint8: the convolution sum can be negative or larger than 255, and storing it in a uint8 array wraps those values around.
Save your result in out = np.zeros(img.shape, np.int16) instead, and then handle the values below 0 or above 255.
After you have handled the out-of-range values, cast the array back to np.uint8 before you save or display it (otherwise it will be treated as a 16-bit image, not an 8-bit one).
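A minimal sketch of that fix, assuming the diagonal kernel from the question; a small synthetic 8x8 image with one anti-diagonal edge stands in for LENNA.JPG so the snippet runs on its own:

```python
import numpy as np

# Diagonal Sobel kernel from the question, in int16 so products keep their sign
SobelKernel = np.array([[2, 1, 0],
                        [1, 0, -1],
                        [0, -1, -2]], np.int16)

# Synthetic greyscale image (stand-in for LENNA.JPG):
# bright where y + x < 8, giving a single anti-diagonal edge
yy, xx = np.indices((8, 8))
img = np.where(yy + xx < 8, 255, 0).astype(np.uint8)

h, w = img.shape
out = np.zeros((h, w), np.int16)  # int16 holds negatives and values > 255

for y in range(1, h - 1):
    for x in range(1, w - 1):
        # 3x3 neighbourhood, cast to int16 before multiplying
        patch = img[y - 1:y + 2, x - 1:x + 2].astype(np.int16)
        out[y, x] = np.sum(patch * SobelKernel)

# Handle the out-of-range values (here: absolute value, clipped to 0..255),
# then cast back to uint8 for saving/showing
out = np.clip(np.abs(out), 0, 255).astype(np.uint8)
```

The key point is that every intermediate value lives in int16 until the very last line; only the final, already-clipped result is cast back to uint8.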
Upvotes: 2