Sean Mackesey

Reputation: 10939

Convert a 2D numpy array into a 3D numpy array representing a grayscale image

I am using OpenCV with numpy and Python. I have a 2D uint8 numpy array whose values represent the local densities of over-threshold pixels from a thresholded image. I would like to convert this into a 3-dimensional RGB image with all three channels set to the same value: essentially a grayscale image where the maximum value maps to (255, 255, 255) and everything else is scaled proportionally. (I need RGB because this seems to be the only kind of image I can write to video with OpenCV.) What is the most efficient way to do this?
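For illustration, something along these lines is what I am after (a rough sketch; `density` here is just a placeholder for my array, and I doubt `np.dstack` is the most efficient approach, hence the question):

>>> import numpy as np
>>> density = np.random.randint(0, 50, size=(480, 640)).astype(np.uint8)  # placeholder for my density array
>>> scaled = (density * (255.0 / density.max())).astype(np.uint8)  # maximum value -> 255 (assumes at least one nonzero entry)
>>> rgb = np.dstack([scaled, scaled, scaled])  # replicate the single channel three times
>>> rgb.shape
(480, 640, 3)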

Upvotes: 3

Views: 6559

Answers (1)

jabaldonedo

Reputation: 26552

I assume that you have a 2D grayscale image, something like:

>>> import cv2
>>> img_gray = cv2.imread('./440px-Lenna.png', cv2.IMREAD_GRAYSCALE)  # image taken from http://en.wikipedia.org/wiki/Lenna

Now img_gray contains a grayscale image:

>>> print(img_gray.shape)
 (440, 440)

You can convert it to a 3-channel BGR image efficiently using cv2.cvtColor:

>>> img_bgr = cv2.cvtColor(img_gray, cv2.COLOR_GRAY2BGR)
>>> print(img_bgr.shape)
 (440, 440, 3)
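If your array holds density values rather than pixels read from disk, you could first rescale it so the maximum maps to 255 and then apply the same conversion. A rough sketch (the `density` array below is only a placeholder for your data):

>>> import numpy as np
>>> density = np.random.randint(0, 50, size=(440, 440)).astype(np.uint8)  # placeholder for your density array
>>> scaled = (density * (255.0 / density.max())).astype(np.uint8)  # scale so the maximum becomes 255
>>> frame = cv2.cvtColor(scaled, cv2.COLOR_GRAY2BGR)
>>> frame.shape
(440, 440, 3)

The resulting 3-channel frame should be the kind of image you can pass to cv2.VideoWriter for writing to video.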

Upvotes: 4
