sophros

Reputation: 16620

numpy.std on memmapped ndarray fails with MemoryError

I have a huge (30 GB) memory-mapped ndarray:

arr = numpy.memmap(afile, dtype=numpy.float32, mode="w+", shape=(n, n,))

After filling it in with some values (which works fine; maximum memory usage stays below 1 GB), I want to calculate the standard deviation:

print('stdev: {0:4.4f}\n'.format(numpy.std(arr)))

This line fails with a MemoryError.

I am not sure why it fails. I would be grateful for tips on how to calculate the standard deviation in a memory-efficient way.

Environment: venv + Python 3.6.2 + NumPy 1.13.1

Upvotes: 2

Views: 1003

Answers (2)

sophros

Reputation: 16620

import math
import numpy as np

# Memory overhead is BLOCKSIZE * itemsize.
BLOCKSIZE = 1024**2
# For numerical stability. The closer this is to mean(arr), the better.
PIVOT = arr[0]

# Assumes arr is 1-D; for the 2-D memmap from the question, flatten it first
# (see the usage sketch below).
n = len(arr)
sum_ = 0.
sum_sq = 0.
for block_start in range(0, n, BLOCKSIZE):
    # Copy the block into memory so the memmapped file is not modified in place.
    block_data = np.array(arr[block_start:block_start + BLOCKSIZE], dtype=np.float64)
    block_data -= PIVOT
    sum_ += math.fsum(block_data)
    sum_sq += math.fsum(block_data**2)

stdev = np.sqrt(sum_sq / n - (sum_ / n)**2)
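
Since the memmap in the question is two-dimensional with shape (n, n), one way to use the loop above is to run it over a flattened view of the array. The sketch below is only illustrative: the file name afile.dat, the small size, and the random fill are placeholders, not the questioner's actual setup.

import math
import numpy as np

n = 2000                                   # placeholder; the real array is much larger
arr = np.memmap('afile.dat', dtype=np.float32, mode='w+', shape=(n, n))
for i in range(n):                         # fill row by row to keep memory use low
    arr[i] = np.random.rand(n)

flat = arr.reshape(-1)                     # 1-D view over the same memory-mapped data
BLOCKSIZE = 1024**2
PIVOT = float(flat[0])

total = flat.size
sum_ = 0.
sum_sq = 0.
for block_start in range(0, total, BLOCKSIZE):
    block = np.array(flat[block_start:block_start + BLOCKSIZE], dtype=np.float64) - PIVOT
    sum_ += math.fsum(block)
    sum_sq += math.fsum(block**2)

stdev = np.sqrt(sum_sq / total - (sum_ / total)**2)
print('stdev: {0:4.4f}'.format(stdev))     # should closely match numpy.std(arr)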

Upvotes: 1

kiyo

Reputation: 1999

Indeed, NumPy's implementations of std and mean make full copies of the array and are horribly memory-inefficient. Here is a better implementation:

import numpy as np

# Memory overhead is BLOCKSIZE * itemsize. Should be at least ~1MB
# for efficient HDD access.
BLOCKSIZE = 1024**2
# For numerical stability. The closer this is to mean(arr), the better.
PIVOT = arr[0]

n = len(arr)  # assumes arr is 1-D; flatten the 2-D memmap with arr.reshape(-1) first
sum_ = 0.
sum_sq = 0.
for block_start in range(0, n, BLOCKSIZE):
    # Copy the block into memory so the memmapped file is not modified in place.
    block_data = np.array(arr[block_start:block_start + BLOCKSIZE], dtype=np.float64)
    block_data -= PIVOT
    sum_ += np.sum(block_data)
    sum_sq += np.sum(block_data**2)

stdev = np.sqrt(sum_sq / n - (sum_ / n)**2)
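
If the single-pass pivot formula raises numerical-stability concerns, a two-pass variant is another option: one block-wise pass for the mean, a second for the squared deviations. This is a sketch rather than part of the answer above; blockwise_std is a made-up helper name, and it expects a 1-D array such as the flattened memmap view.

import numpy as np

def blockwise_std(flat, blocksize=1024**2):
    # First pass: accumulate the sum block by block in float64 to get the mean.
    n = flat.size
    total = 0.0
    for start in range(0, n, blocksize):
        total += np.sum(flat[start:start + blocksize], dtype=np.float64)
    mean = total / n

    # Second pass: accumulate squared deviations from the mean.
    sq_dev = 0.0
    for start in range(0, n, blocksize):
        block = np.asarray(flat[start:start + blocksize], dtype=np.float64)
        sq_dev += np.sum((block - mean)**2)
    # Population standard deviation (ddof=0), matching numpy.std's default.
    return np.sqrt(sq_dev / n)

# Usage with the memmapped array from the question:
# stdev = blockwise_std(arr.reshape(-1))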

Upvotes: 1
