Reputation: 819
in a nutshell: I need a fast (precompiled) function like filter2D from OpenCV, but with double-type output, not integer.
Detail: I have a numpy array which stores a monochrome image from OpenCV.
I need to calculate the mean value of the matrix under some square (for example) kernel, like this:
kernel size = (3,3)
input array:
[[13 10 10 10]
 [12 10 10  8]
 [ 9  9  9  9]
 [ 9 10 10  9]]
output array:
[[ 10.22222222   9.44444444]
 [  9.77777778   9.33333333]]
For example: 10.22222222 = (13+10+10+12+10+10+9+9+9)/9
I wrote this function:
import numpy as np

def smooth_filt(src, area_x, area_y):
    y, x = src.shape
    x_lim = int(area_x / 2)
    y_lim = int(area_y / 2)
    result = np.zeros((y - 2 * y_lim, x - 2 * x_lim), dtype=np.float64)
    for x_i in range(x_lim, x - x_lim):
        for y_i in range(y_lim, y - y_lim):
            result[y_i - y_lim, x_i - x_lim] = np.mean(
                src[y_i - y_lim:y_i + area_y - y_lim,
                    x_i - x_lim:x_i + area_x - x_lim])
    return result
But this is not fast enough.
Please tell me if there is a faster way to calculate this.
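For comparison, the nested loop above can be replaced by a pure-NumPy windowed mean. This is only a sketch; it assumes NumPy >= 1.20 for sliding_window_view, and the window axes (2, 3) come from the shape that function returns:

```python
import numpy as np

src = np.array([[13, 10, 10, 10],
                [12, 10, 10,  8],
                [ 9,  9,  9,  9],
                [ 9, 10, 10,  9]], dtype=np.float64)

# View every 3x3 window of the image without copying data,
# then average over the two window axes.
windows = np.lib.stride_tricks.sliding_window_view(src, (3, 3))
means = windows.mean(axis=(2, 3))
print(means)  # 2x2 array of window means, float64
```

The view itself allocates no memory; only the reduction in mean touches every window, so this avoids the Python-level double loop entirely.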
Answer: I checked all the methods. You can see the benchmark code here: http://pastebin.com/y5dEVbzX
I concluded that blur is the most efficient method: its run time is almost independent of the kernel size.
The graph shows the processing time of one image with the different methods; the test set is 298 images.
Upvotes: 3
Views: 3153
Reputation: 80187
You can exploit the integral function, which calculates the sum of the values from element (0,0) to element (i,j).
Using these integral images, you can calculate the sum, mean, and standard deviation over a specific upright or rotated rectangular region of the image in constant time.
If the "kernel" size is a constant M, multiply the resulting integral matrix by 1/M^2 to simplify the mean calculation.
To get sum in some window (x1,y1)-(x2,y2), just find
S((x1,y1)-(x2,y2)) = I(x1,y1) + I(x2,y2) - I(x1,y2) - I(x2,y1)
pseudocode:
integral(src, sum)   // sum[i, j] holds the sum of src over [0, i) x [0, j)
multvalue = 1 / (kernelsize * kernelsize)
sum = sum * multvalue
for every (x = 0..n-kernelsize-1, y = 0..n-kernelsize-1):
    mean[x, y] = sum[x, y]
               + sum[x + kernelsize, y + kernelsize]
               - sum[x, y + kernelsize]
               - sum[x + kernelsize, y]
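The pseudocode above can be sketched in NumPy without OpenCV: two cumulative sums play the role of integral, and a zero row/column is prepended so the four corner lookups also work at the image border. The helper name box_mean is just illustrative:

```python
import numpy as np

def box_mean(src, k):
    # Integral image I with a leading zero row/column, so that
    # I[y, x] equals the sum of src[:y, :x].
    I = np.zeros((src.shape[0] + 1, src.shape[1] + 1), dtype=np.float64)
    I[1:, 1:] = src.cumsum(axis=0).cumsum(axis=1)
    # Window sum from four corner lookups, as in the pseudocode,
    # then division by the kernel area gives the mean.
    s = I[k:, k:] + I[:-k, :-k] - I[:-k, k:] - I[k:, :-k]
    return s / (k * k)

src = np.array([[13, 10, 10, 10],
                [12, 10, 10,  8],
                [ 9,  9,  9,  9],
                [ 9, 10, 10,  9]], dtype=np.float64)
print(box_mean(src, 3))
```

Once the integral image is built, each output element costs four lookups regardless of kernel size, which is the constant-time property described above.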
Upvotes: 1
Reputation: 2154
If you are interested in an OpenCV solution, the function you need is cv2.blur.
In most cases it works faster than a generic convolution, since it has a separate optimization for normalized kernels (kernels whose coefficients sum to 1).
blurred = cv2.blur(img, (3, 3))
See the nice tutorial about smoothing here.
Upvotes: 3
Reputation: 5935
Take a look at scipy.signal.convolve2d. It's pretty straightforward:
import numpy as np
import scipy.signal as ss

data = np.array([[13, 10, 10, 10],
                 [12, 10, 10,  8],
                 [ 9,  9,  9,  9],
                 [ 9, 10, 10,  9]])
kernel = np.ones((3, 3))
kernel /= kernel.size
ss.convolve2d(data, kernel, mode='valid')
This gives
array([[ 10.22222222,   9.44444444],
       [  9.77777778,   9.33333333]])
Upvotes: 3
Reputation: 21831
Calculating the average in blocks is simply convolving the image with a constant kernel.
You can use scipy.signal.convolve2d for this:
import numpy as np
from scipy.signal import convolve2d

kernel = np.ones((3, 3)) / 9.
out = convolve2d(img, kernel, mode='valid')
The mode='valid' argument is required so that you only get the part of the result where the kernel fully overlaps the image.
Upvotes: 5