fouronnes

Reputation: 4028

Good way to implement a normalize filter in numpy

I'm not so familiar with the memory model of NumPy arrays. Is there a more efficient (or 'better practice') way of computing a normalized version of an image, i.e. an image where, for each pixel, r + g + b == 1?

Perhaps using a more matrix-oriented approach? Does such a filter have a name?

Here is the code that I have so far (ignoring divide by zero errors):

import numpy as np

def normalize(image):
    lines, columns, depth = image.shape
    normalized = np.zeros(image.shape)
    for i in range(lines):
        for j in range(columns):
            normalized[i, j] = image[i, j] / float(np.sum(image[i, j]))
    return normalized

Where image is a np.array of depth 3.

Thanks

Upvotes: 2

Views: 1009

Answers (1)

JaminSore

Reputation: 3936

This can be done much more efficiently by taking advantage of NumPy's broadcasting rules:

>>> import numpy as np
>>> image = np.random.random(size=(3,4,5))
>>> sums = image.sum(axis=2)
>>> sums.shape
(3, 4)
>>> normalized = image / sums[:, :, None]  # same as image / sums[:, :, np.newaxis]
>>> normalized.sum(axis=2)
array([[ 1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.]])
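
An equivalent way to line up the shapes is to pass keepdims=True to sum, which keeps the summed axis as a length-1 dimension, so the explicit [:, :, None] indexing isn't needed:

>>> normalized = image / image.sum(axis=2, keepdims=True)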

If you're worried about memory and don't need the original image, you can normalize it in place:

>>> image /= image.sum(axis=2)[:, :, None]
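
Note that the in-place version only works if image already has a floating-point dtype; dividing an integer array in place with /= will raise an error, so an integer image would first need something like image = image.astype(float), which makes a copy anyway.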

As a function:

def normalize(image, inplace=False):
    if inplace:
        image /= image.sum(axis=2)[:, :, None]
    else:
        return image / image.sum(axis=2)[:, :, None]
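
If the input may contain all-zero pixels (the divide-by-zero case the question sets aside), a minimal sketch of one way to handle them is to divide by 1 wherever the channel sum is 0, which leaves those pixels at zero (the normalize_safe name is just for illustration):

def normalize_safe(image):
    # Sum over the colour channels, keeping the axis so it broadcasts.
    sums = image.sum(axis=2, keepdims=True)
    # Substitute 1 for any zero sums so all-zero pixels stay zero.
    return image / np.where(sums == 0, 1, sums)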

Upvotes: 4
