Reputation: 2827
I'm looking for help on how to increase the speed of this calculation. What I'm trying to do is access each pixel, do some math on it, and create a new image from the resulting pixel values. I'm running this over a few thousand small images, which takes over an hour. Any help would be appreciated, thanks.
import cv2

image = cv2.imread('image.png')
height, width, depth = image.shape
for i in range(0, height):
    for j in range(0, width):
        B = float(image.item(i, j, 0))  # blue channel of image
        R = float(image.item(i, j, 2))  # red channel of image
        num = R - B
        den = R + B
        if den == 0:
            NEW = 1
        else:
            NEW = (num / den) * 255.0
        NEW = min(NEW, 255.0)
        NEW = max(NEW, 0.0)
        image[i, j] = NEW  # sets all BGR channels to NEW value
cv2.imwrite('newImage.png', image)
Upvotes: 0
Views: 252
Reputation: 879351
Remove the double for-loop. The key to speed with NumPy is to operate on the whole array at once:
import cv2
import numpy as np

image = cv2.imread('image.png')
image = image.astype('float')
B, R = image[:, :, 0], image[:, :, 2]  # OpenCV loads images in BGR order
num = R - B
den = R + B
with np.errstate(divide='ignore', invalid='ignore'):  # den can be 0
    image = np.where(den == 0, 1, (num / den) * 255.0).clip(0.0, 255.0)
cv2.imwrite('newImage.png', image.astype(np.uint8))
By calling NumPy functions on whole arrays (rather than doing Python operations on scalar pixel values), you offload most of the computational work to the fast compiled C/C++/Cython (or Fortran) code that the NumPy functions call.
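For the original workload of a few thousand small images, the same vectorized math can be wrapped in a function and applied file by file. A minimal sketch, assuming the inputs live in an input/ directory and results go to an existing output/ directory (both paths are hypothetical):

import glob
import os

import cv2
import numpy as np

def normalized_red_blue(image):
    # Vectorized ((R - B) / (R + B)) * 255, clipped to [0, 255]
    image = image.astype('float')
    B, R = image[:, :, 0], image[:, :, 2]  # OpenCV loads images as BGR
    num = R - B
    den = R + B
    with np.errstate(divide='ignore', invalid='ignore'):  # den can be 0
        out = np.where(den == 0, 1, (num / den) * 255.0)
    return out.clip(0.0, 255.0).astype(np.uint8)

for path in glob.glob('input/*.png'):
    result = normalized_red_blue(cv2.imread(path))
    cv2.imwrite(os.path.join('output', os.path.basename(path)), result)

On small images the per-pixel Python loop is typically orders of magnitude slower than this; timing one file with time.perf_counter before and after the change is an easy way to confirm the speedup on your own data.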
Upvotes: 4