Reputation: 2313
I'm trying to paint part of an image as black and white using OpenCV2 and Python3. This is the code I'm trying:
(x, y, w, h) = cv2.boundingRect(c)
cv2.rectangle(frame, (x,y), (x+w,y+h),0,0)
sub_face = frame[y:y+h, x:x+w]
# apply a gaussian blur on this new rectangle image
# sub_face = cv2.GaussianBlur(sub_face,(9, 9), 30, borderType = 0)
sub_face = cv2.cvtColor(sub_face, cv2.COLOR_BGR2GRAY)
# merge this blurry rectangle to our final image
result_frame[y:y+sub_face.shape[0], x:x+sub_face.shape[1]] = sub_face
When I apply the GaussianBlur method, it works properly, but when I try the cvtColor method it fails on the last line with the message: could not broadcast input array from shape (268,182) into shape (268,182,3). What am I doing wrong?
The c variable in the first line is a contour (from motion detection).
I'm new to Python and OpenCV.
Thanks!
Upvotes: 1
Views: 486
Reputation: 104555
You are trying to assign the single channel that results from your cv2.cvtColor call to three channels at once, because result_frame is a BGR / three-channel image. You probably want to assign that single channel to all three channels. One clean way to do this is to exploit NumPy broadcasting: create a singleton channel in the third dimension, then let it broadcast over all three channels. Since you are using the cv2 interface to OpenCV, the native datatype used for manipulating images is a NumPy array:
# merge this blurry rectangle to our final image
result_frame[y:y+sub_face.shape[0], x:x+sub_face.shape[1]] = sub_face[:,:,None]
The : operation in this context accesses all values in a particular dimension; here we want the first and second dimensions. Therefore, sub_face[:,:,None] makes your single-channel image 3D, with the third dimension being a singleton (i.e. of size 1). NumPy broadcasting then copies this single channel to all three channels simultaneously.
Note that I didn't have to explicitly access the third dimension when assigning to result_frame. That is because result_frame[y:y+sub_face.shape[0], x:x+sub_face.shape[1]] and result_frame[y:y+sub_face.shape[0], x:x+sub_face.shape[1], :] are the same thing: dropping the indexing after the last dimension you specify implicitly assumes :.
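As a quick, self-contained illustration of the broadcasting trick, here is a minimal sketch using a small toy array instead of an actual frame (the array shapes and values are just placeholders):
import numpy as np

# toy 3-channel "frame" and a single-channel patch (placeholder data)
result_frame = np.zeros((4, 4, 3), dtype=np.uint8)
sub_face = np.arange(6, dtype=np.uint8).reshape(2, 3)   # shape (2, 3)

# sub_face[:, :, None] has shape (2, 3, 1); broadcasting copies it into all 3 channels
result_frame[0:2, 0:3] = sub_face[:, :, None]

print(result_frame[0:2, 0:3, 1])                                     # same values as sub_face
print(np.array_equal(result_frame[..., 0], result_frame[..., 2]))    # True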
Upvotes: 2
Reputation: 11941
You converted sub_face to a single-channel image, but result_frame is a 3-channel image, so in the last line you are trying to assign a single-channel array to a 3-channel slice.
You could do this:
result_frame[y:y+sub_face.shape[0], x:x+sub_face.shape[1], 0] = sub_face
result_frame[y:y+sub_face.shape[0], x:x+sub_face.shape[1], 1] = sub_face
result_frame[y:y+sub_face.shape[0], x:x+sub_face.shape[1], 2] = sub_face
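If you would rather let OpenCV replicate the channel instead of assigning it three times, one alternative (not part of the answer above, just a common shortcut) is to convert the grayscale patch back to three channels before assigning it:
# expand the single-channel patch back to 3 channels, then assign in one step
result_frame[y:y+sub_face.shape[0], x:x+sub_face.shape[1]] = cv2.cvtColor(sub_face, cv2.COLOR_GRAY2BGR)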
Upvotes: 0