Manipal King

Reputation: 420

Python OpenCV : inverting colors in a numpy image array

I have been trying to manipulate the colors (BGR values) of a very simple 8 x 8 image (variable "abc"), but when I try to view the new image with the inverted colors (variable "target"), all I get is a black picture. Can anyone help me please?

I have even changed the code so that it simply replicates the image one pixel at a time and checked the two arrays for an exact match; the condition evaluates to True, but the picture remains black.

I have posted the code below:

import cv2
import numpy as np    

abc = cv2.imread("new.png")

(x1, y1) = abc.shape[:2]

a1 = []
a2 = []
a3 = []

for i in range(x1):
    for d in range(y1):
        for g in range(3):
            if g == 0:
                a1.append(abc[i, d, g])
            elif g == 1:
                a2.append(abc[i, d, g])
            elif g == 2:
                a3.append(abc[i, d, g])

u = 0

target = np.empty(shape=(x1, y1, 3), dtype="int32")
for i in range(x1):
    for d in range(y1):
        target[i, d, 1] = a2[u]
        target[i, d, 2] = a3[u]
        target[i, d, 0] = a1[u]
        u = u + 1

if (abc == target).all():
    print "equal/match"

cv2.imshow('target', target)
cv2.waitKey(0)
cv2.destroyAllWindows()

Upvotes: 2

Views: 6792

Answers (1)

rayryeng

Reputation: 104514

I would like to point you to the documentation for cv2.imshow: http://docs.opencv.org/modules/highgui/doc/user_interface.html#imshow

Read what the notes say about the type of image that you are trying to display:

  • If the image is 8-bit unsigned, it is displayed as is.
  • If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to [0,255].
  • If the image is 32-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to [0,255].

Your situation is the second point. Your input image is most likely an unsigned 8-bit integer image to begin with, but because you created the output with dtype="int32", it is a 32-bit integer image, and so all of its values get divided by 256 for display. For an 8-bit unsigned integer type, all values lie between 0 and 255, so dividing by 256 makes all of your pixels black (i.e. [0,255] / 256 --> [0,0] under integer division).
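
To make that concrete, here is a minimal sketch with a made-up pixel value (not taken from your image) showing what happens when an 8-bit value ends up in a 32-bit integer array and is then divided by 256, which is roughly what cv2.imshow does for 32-bit integer images:

import numpy as np

# Hypothetical example: an 8-bit value stored as a 32-bit integer and then
# divided by 256, mimicking the scaling imshow applies to 32-bit integer images.
value = np.uint8(200)        # a typical 8-bit pixel value
as_int32 = np.int32(value)   # same value, but now a 32-bit integer
print(as_int32 // 256)       # 0 -> rendered as black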

To fix this, you need to create the output with the right image type. In your case, I'm going to assume that your input data type was uint8, which makes the most sense given what is happening, so simply change target so that its dtype is uint8:

target = np.empty(shape=(x1, y1, 3), dtype=np.uint8)
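
If you would rather not hard-code the type at all, a slightly more general sketch (assuming abc has already been loaded as above) is to take the shape and dtype straight from the input:

# Match the input image's shape and dtype so target always agrees with abc.
target = np.empty(abc.shape, dtype=abc.dtype)   # or equivalently: np.empty_like(abc)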

Minor Note

You can accomplish what you want very efficiently by indexing into the third dimension.

target can then simply be the following, without any for loops or temporary lists to copy the channel values over:

target = abc[:,:,[1,2,0]]

The added bonus is that target will maintain the same data type that was taken on by abc. Read up on numpy indexing and slicing here: http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
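
As a quick sanity check on a small made-up array (not your actual image), you can confirm that indexing along the channel axis re-orders the channels and preserves the original dtype:

import numpy as np

# Small made-up BGR-style array, just to show the channel re-ordering
# and that the uint8 dtype is carried over to the result.
abc = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
target = abc[:, :, [1, 2, 0]]
print(target.dtype)   # uint8, same as abc
print(abc[0, 0])      # [0 1 2]
print(target[0, 0])   # [1 2 0]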

Upvotes: 3
