dermen

Reputation: 5362

Python code to quickly reduce the resolution of an image using numpy

Below is some code that reduces the resolution of a 2D numpy array (image) by binning small pixels into larger pixels. I am wondering whether it can be made faster, or whether there are faster alternatives. Also, any suggestions in general are appreciated, for instance code of similar speed that produces a smoother reduced image (e.g. by using spline interpolation).

import numpy as np

def reduce_img(img, bin_fac=1):
    assert img.ndim == 2
    imgX0 = img.shape[0]  # slow axis
    imgX1 = img.shape[1]  # fast axis

    # trim the edges so that both dimensions are divisible by bin_fac
    r0 = imgX0 % bin_fac
    r1 = imgX1 % bin_fac

    img = np.copy(img)[:imgX0 - r0, :imgX1 - r1]

    # bin the fast axis
    img_C = img.ravel(order='C')
    img = np.average([img_C[i::bin_fac] for i in range(bin_fac)], axis=0)
    img = img.reshape((imgX0 - r0, (imgX1 - r1) // bin_fac), order='C')

    # bin the slow axis
    img_F = img.ravel(order='F')
    img = np.average([img_F[i::bin_fac] for i in range(bin_fac)], axis=0)
    img = img.reshape(((imgX0 - r0) // bin_fac, (imgX1 - r1) // bin_fac), order='F')

    return img

Here is a result:

>>> from pylab import *
>>> imshow(img)
>>> show()

[test image]

>>> img_r = reduce_img(img, bin_fac=7)
>>> imshow(img_r)
>>> show()

[test image, reduced]

>>> %timeit reduce_img(img, bin_fac=7)
1000 loops, best of 3: 655 µs per loop

Upvotes: 2

Views: 4765

Answers (1)

Oliver W.

Reputation: 13459

I'll start by mentioning that your way of binning seems rather unusual, which is what @ljetibo was referring to in the comments, I presume. I'll come back to this after the "optimization" talk.

First, you can improve your code slightly by getting rid of the superfluous call to np.copy: the slicing already rebinds img to a view of the passed-in array. The subsequent ravel operation will return a copy anyway, unless the image shape is an exact multiple of the binning factor bin_fac.
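
In other words, the trimming step can be a plain slice (a minimal sketch of that single change, nothing else altered):

img = img[:imgX0 - r0, :imgX1 - r1]  # a view of the passed-in array, no copy needed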

Now, while the list comprehensions are fast, you're re-creating a numpy array from a (possibly non-contiguous) list of arrays, which means you're again copying memory from one place to another. Those operations all eat away at the efficiency.

What you might be interested in is simply generating a highly memory-efficient view on the original image. That's where as_strided comes in:

from numpy.lib.stride_tricks import as_strided

def strided_rescale(g, bin_fac):
    # 4D view: (block rows, block cols, rows within a block, cols within a block)
    strided = as_strided(g,
        shape=(g.shape[0] // bin_fac, g.shape[1] // bin_fac, bin_fac, bin_fac),
        strides=(g.strides[0] * bin_fac, g.strides[1] * bin_fac) + g.strides)
    return strided.mean(axis=-1).mean(axis=-1)  # order is NOT important! See notes below.
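
One note of caution: as_strided does no bounds checking, so it's safest to only read from such a view and reduce it immediately, as done here. Typical usage is then simply (an illustrative call, not part of the timings below):

img_r = strided_rescale(img, 7)  # each output pixel is the mean of a 7x7 block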

Timings show that this is usually a bit faster than the original method, and the advantage grows as the binning factor increases:

In [263]: stp = 'from __main__ import img, strided_rescale, reduce_img'

In [264]: for n in range(1, 21):
   .....:     a = timeit.timeit(stmt='strided_rescale(img, {})'.format(n), setup=stp, number=100)
   .....:     b = timeit.timeit(stmt='reduce_img(img, {})'.format(n), setup=stp, number=100)
   .....:     c = b*1./a
   .....:     d = np.ptp(strided_rescale(img, n) - reduce_img(img, n))
   .....:     print('{a:7f} | {b:7f} | {c:1.4f} | {d:1.4g}'.format(**locals()))
   .....:
0.124911 | 0.277254 | 2.2196 | 0
0.303813 | 0.171833 | 0.5656 | 0
0.217745 | 0.188637 | 0.8663 | 0
0.162199 | 0.139770 | 0.8617 | 0
0.132355 | 0.138402 | 1.0457 | 0
0.121542 | 0.160275 | 1.3187 | 0
0.102930 | 0.162041 | 1.5743 | 0
0.090694 | 0.138881 | 1.5313 | 2.384e-07
0.097320 | 0.174690 | 1.7950 | 1.788e-07
0.082376 | 0.155261 | 1.8848 | 2.384e-07
0.084228 | 0.178397 | 2.1180 | 2.98e-07
0.070411 | 0.181175 | 2.5731 | 2.98e-07
0.075443 | 0.175605 | 2.3277 | 5.96e-08
0.068964 | 0.182602 | 2.6478 | 5.96e-08
0.067155 | 0.168532 | 2.5096 | 1.192e-07
0.056193 | 0.195684 | 3.4824 | 2.98e-07
0.063575 | 0.206987 | 3.2558 | 2.98e-07
0.078850 | 0.187697 | 2.3804 | 2.384e-07
0.053072 | 0.168763 | 3.1799 | 2.384e-07
0.047512 | 0.151598 | 3.1907 | 1.788e-07
# time a (strided) | time b (original) | ratio b/a | peak-to-peak: check whether both arrays are the same

I believe the observed minor differences in array equality come down to floating-point rounding: the two methods accumulate the sums in a different order, so the results can differ in the last few bits. I'm not 100% sure on this though.

Now that the optimization talk is over, let's go back to your binning method. With your current implementation, you have split up the image into square, non-overlapping zones. These submatrices needn't be square for the rest of this story; they could be rectangular (if the aspect ratio of the image can be altered) and the conclusions would still hold. So, in each submatrix you are taking the row-wise mean, after which you take the mean of the resulting column vector. It can easily be shown mathematically that this is the same as taking the mean over the entire submatrix. That's good news, because it means that in the strided_rescale function shown above, you could simply replace the return statement by:

return strided.mean(axis=(-2, -1))

which will give you yet another (small) speed increase.
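
To see why the order doesn't matter, here is a tiny, purely illustrative check on an arbitrary submatrix: the mean of the row-wise means equals the mean over the whole block whenever all rows have the same length.

import numpy as np

sub = np.arange(12, dtype=float).reshape(3, 4)  # one 3x4 submatrix
row_then_col = sub.mean(axis=1).mean()          # row-wise means, then their mean
overall = sub.mean()                            # mean over the entire block
assert np.isclose(row_then_col, overall)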

I thought the suggestion to use scipy.misc.imresize was a very good one, until I tried it on ndarrays with a dtype != np.uint8. And even then, the mode parameter has to be chosen correctly, and it seems to take only the top-left corners of the sub-matrices:

In [39]: a = np.arange(16, dtype=np.uint8).reshape(4,4)

In [40]: a
Out[40]: 
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]], dtype=uint8)

In [41]: imresize(a, (2,2), mode='P')
Out[41]: 
array([[ 0,  2],
       [ 8, 10]], dtype=uint8)

In [42]: imresize(a, (2,2), mode='L')
Out[42]: 
array([[0, 1],
       [6, 7]], dtype=uint8)

That is probably not what you were after. So stride_tricks works well for the actual binning. If you want smooth resizing behaviour (e.g. by using spline interpolation), you'll be looking towards the Python Imaging Library and everything that uses it under the hood, or e.g. OpenCV, which also provides resize behaviour as summarized in this post.
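
For example, scipy.ndimage.zoom does spline-based resampling and works on float arrays directly. A minimal sketch (the order parameter sets the spline degree; the result will of course differ slightly from plain block-averaging):

from scipy import ndimage

# downscale by a factor of 7 using cubic spline interpolation (order=3)
img_smooth = ndimage.zoom(img, 1.0 / 7, order=3)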

Upvotes: 2
