Nick Jones

Reputation: 225

Improving memory efficiency of a vectorised function

I have nine large float arrays (3000 by 3000). The arrays are called g_pro_X_array etc.

The vectorised function checks through the cells in each array, adding them together, and once the running total reaches 0.5 it returns a value from a lookup table of "grain sizes".

My trouble is that this is a very memory-intensive operation (it uses nearly 1 GB of RAM). Is there a more memory-efficient way to do this calculation?

Here is my code:

    grain_lookup = {"array_1": self.grain_size_1, "array_2": self.grain_size_2, "array_3": self.grain_size_3, "array_4": self.grain_size_4, "array_5": self.grain_size_5, "array_6": self.grain_size_6, "array_7": self.grain_size_7, "array_8": self.grain_size_8, "array_9": self.grain_size_9}

    # Create a function to look up the d50 grainsize
    def get_grain_50(a1, a2, a3, a4, a5, a6, a7, a8, a9):
        if a1 >= 0.5:
            return grain_lookup["array_1"]
        elif a1 + a2 >= 0.5:
            return grain_lookup["array_2"]
        elif a1 + a2 + a3 >= 0.5:
            return grain_lookup["array_3"]
        elif a1 + a2 + a3 + a4 >= 0.5:
            return grain_lookup["array_4"]
        elif a1 + a2 + a3 + a4 + a5 >= 0.5:
            return grain_lookup["array_5"]
        elif a1 + a2 + a3 + a4 + a5 + a6 >= 0.5:
            return grain_lookup["array_6"]
        elif a1 + a2 + a3 + a4 + a5 + a6 + a7 >= 0.5:
            return grain_lookup["array_7"]
        elif a1 + a2 + a3 + a4 + a5 + a6 + a7 + a8 >= 0.5:
            return grain_lookup["array_8"]
        elif a1 + a2 + a3 + a4 + a5 + a6 + a7 + a8 + a9 >= 0.5:
            return grain_lookup["array_9"]
        else:
            return -9999

    V_get_grain = np.vectorize(get_grain_50)

    d50 = np.empty_like(g_pro_1_array, dtype = float)

    d50 = V_get_grain(g_pro_1_array, g_pro_2_array, g_pro_3_array, g_pro_4_array, g_pro_5_array, g_pro_6_array, g_pro_7_array, g_pro_8_array, g_pro_9_array)

Upvotes: 0

Views: 74

Answers (1)

DrV

Reputation: 23490

There are certain efficiency vs. memory vs. readability trade-offs you need to make, and you do not really mention them. However, it is reasonable to split your algorithm in two:

  • find how many images have to be stacked before reaching the limit (0..9, where 9 means that the limit is not reached)

  • apply the look-up table

If you are afraid of using a lot of memory (a full cumsum over the nine stacked images uses roughly 640 MB; a sketch of that variant follows the timing note below), you can do the sum one image at a time:

    import numpy as np

    # the grain table must be an array, fill in the numbers you want
    graintable = np.array([100, 200, 300, 400, 500, 600, 700, 800, 900, -9999])

    def V_get_grain(*images):
        # create the cumulative sum buffer (all zeros at this point)
        csum = np.zeros_like(images[0])
        # create the counter for the number of images needed to reach .5
        cnt = np.zeros(images[0].shape, dtype='uint8')

        # iterate through the images:
        for img in images:
            # add the image into the cumulative sum buffer
            csum += img
            # add 1 to the counter wherever the sum of a pixel is still < .5
            cnt += csum < .5

        # now cnt has a number for each pixel:
        # 0: the first image >= .5
        # ...
        # 9: all images together < .5

        return graintable[cnt]

This needs 4 or 8 bytes per pixel for the cumulative sum (depending on the type of floats you use) and 1 byte per pixel for the counter. This should also be relatively quick (my computer spent 368 ms for nine 3000x3000 images with 8-byte floats). The function can be called just as you call your function in the question.
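For comparison, here is a minimal sketch of the cumsum variant mentioned above; the function name get_grain_cumsum is only a placeholder, and it assumes the same graintable and nine same-shaped float images:

    import numpy as np

    # same lookup table as above; fill in the real grain sizes
    graintable = np.array([100, 200, 300, 400, 500, 600, 700, 800, 900, -9999])

    def get_grain_cumsum(*images):
        # stack the images into a (9, 3000, 3000) array and take the cumulative
        # sum along the first axis, writing in place to avoid a second copy
        csum = np.stack(images)
        np.cumsum(csum, axis=0, out=csum)
        # count, per pixel, how many partial sums stay below .5 (0..9)
        cnt = (csum < .5).sum(axis=0)
        return graintable[cnt]

The stacked cumulative-sum buffer is what accounts for the roughly 640 MB figure above (9 × 3000 × 3000 × 8 bytes for 8-byte floats), so the loop version is the better choice if memory is the main concern.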

Upvotes: 1
