Reputation: 53916
Using this code:
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(cv2.imread('badger.jpeg', cv2.IMREAD_GRAYSCALE))
an image is read as greyscale and plotted to screen.
The image is plotted as shown below.
This does not appear to be grayscale: the rendered image contains colours that do not range from white to grey.
Is my code correct to read the image as grayscale using the IMREAD_GRAYSCALE parameter?
The image is located at: https://sciencing.com/difference-between-badger-wolverine-8645505.html
Upvotes: 2
Views: 717
Reputation: 51425
The image is indeed flattened to grayscale if you use cv2.IMREAD_GRAYSCALE
(you can test this by comparing cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE).shape with cv2.imread('im.jpg').shape,
and see that the former is a 2-d array and the latter is a 3-d array).
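A minimal sketch of that shape check, assuming 'badger.jpeg' (the file from the question) is in the working directory:

import cv2

# Grayscale read: a 2-d array of shape (height, width)
gray = cv2.imread('badger.jpeg', cv2.IMREAD_GRAYSCALE)
print(gray.shape)      # e.g. (480, 640)

# Default colour read: a 3-d array of shape (height, width, 3), in BGR order
colour = cv2.imread('badger.jpeg')
print(colour.shape)    # e.g. (480, 640, 3)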
The issue is with the way matplotlib chooses to map your pixel values. When using plt.imshow(), it applies the default colormap (which is viridis, for some reason), so pixel intensities are mapped onto the viridis gradient running from dark purple through green to yellow rather than onto shades of grey.
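As a quick check of your own setup (not part of the original answer), you can print the colormap matplotlib falls back on when imshow is called without a cmap argument:

import matplotlib.pyplot as plt

# The rcParams entry imshow uses when no cmap is given;
# prints 'viridis' on matplotlib >= 2.0 unless it has been overridden
print(plt.rcParams['image.cmap'])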
You can change cmap to gray in order to map them onto a plain black-to-white gradient instead:
plt.imshow(cv2.imread('badger.jpeg', cv2.IMREAD_GRAYSCALE), cmap='gray')
plt.show()
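As an optional addition (not in the original answer), you can also pin vmin and vmax so that 0 always renders as black and 255 as white, instead of letting imshow autoscale to the image's own minimum and maximum:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('badger.jpeg', cv2.IMREAD_GRAYSCALE)
# vmin/vmax disable per-image autoscaling, so pixel values map directly
# onto the full 0-255 grayscale range
plt.imshow(img, cmap='gray', vmin=0, vmax=255)
plt.axis('off')   # hide the axis ticks around the picture
plt.show()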
Upvotes: 5