Reputation: 1455
I am working on a program in Python that makes use of a function very similar to the addWeighted function in OpenCV. The difference is that it doesn't actually add the numpy arrays representing the images; instead, it takes whichever pixel is brighter at any particular coordinate and uses that value.
What I have been finding, however, is that despite the fact that these functions do very similar things, the addWeighted function is much faster. So my question is: how can I modify my current solution to be equally fast? Is there a way I can use the multiprocessing module, or something similar?
Here is the code:
import numpy as np

# Output image, same shape as the inputs.
image = np.zeros(image_1.shape)
for row_index, row in enumerate(image_1):
    for col_index, col in enumerate(row):
        pixel_1 = image_1[row_index, col_index]
        pixel_2 = image_2[row_index, col_index]
        # Cast to int before summing to avoid uint8 overflow.
        sum_1 = int(pixel_1[0]) + int(pixel_1[1]) + int(pixel_1[2])
        sum_2 = int(pixel_2[0]) + int(pixel_2[1]) + int(pixel_2[2])
        # Keep whichever pixel has the larger channel sum.
        if sum_2 > sum_1:
            image[row_index, col_index] = pixel_2
        else:
            image[row_index, col_index] = pixel_1
Where image_1 and image_2 are both numpy arrays representing images, both with the same shape (720, 1280, 3).
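In case it helps to reproduce the timing difference, here is a minimal sketch of test inputs; the random data is purely hypothetical, and any two uint8 arrays of that shape would do:

import numpy as np

# Hypothetical stand-ins for the real images: random uint8 data
# with the stated shape (720, 1280, 3).
image_1 = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
image_2 = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)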
Upvotes: 2
Views: 710
Reputation: 221664
One vectorized approach would be -
mask = image_2.astype(int).sum(-1) > image_1.astype(int).sum(-1)
out = np.where(mask[...,None], image_2, image_1)
Steps:
1. Convert to int dtype, sum along the last axis, and perform an element-wise comparison. This gives us a mask.
2. Use np.where with this mask, extended to the same number of dims as the input arrays, to do the choosing. This employs the concept of NumPy broadcasting to do the choosing in a vectorized manner, so that's worth a good look; the shape sketch below makes the broadcast explicit.
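To make that concrete, here is a small sketch on tiny hypothetical inputs, just to show the shapes each step produces:

import numpy as np

# Tiny hypothetical inputs so the shapes are easy to read.
image_1 = np.random.randint(0, 256, (2, 2, 3), dtype=np.uint8)
image_2 = np.random.randint(0, 256, (2, 2, 3), dtype=np.uint8)

mask = image_2.astype(int).sum(-1) > image_1.astype(int).sum(-1)
print(mask.shape)             # (2, 2): one boolean per pixel
print(mask[..., None].shape)  # (2, 2, 1): broadcasts against (2, 2, 3)

out = np.where(mask[..., None], image_2, image_1)
print(out.shape)              # (2, 2, 3)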
Note: Alternatively, we can also use keepdims=True to keep the number of dims while summing and thus avoid extending dims in the next step.
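For completeness, a sketch of that keepdims variant (same logic as above, with the summed axis kept as length 1 so no extra dim needs to be added):

# keepdims=True leaves the summed axis in place as length 1, so the
# mask already has shape (720, 1280, 1) and broadcasts against the
# (720, 1280, 3) inputs without the [..., None] step.
mask = image_2.astype(int).sum(-1, keepdims=True) > \
       image_1.astype(int).sum(-1, keepdims=True)
out = np.where(mask, image_2, image_1)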
Upvotes: 2