auzn

Reputation: 695

numpy.memmap raises "not enough memory" while plenty is available

During a typical call to numpy.memmap() on a 64-bit Windows machine, Python raises the following error:

OSError: [WinError 8] Not enough memory resources are available to process this command

A different Windows machine raises the same error with slightly different text:

OSError: [WinError 8] Not enough storage is available to process this command.

Here is an abbreviated version of the code:

with open(infile, 'rb') as f:
  ......
  array = numpy.memmap(f, dtype='uint8', mode='r', offset=offset, shape=arraysize).tolist()

Python was only using about 50 MB of memory at that point. What could be causing it to run out of memory?

Upvotes: 4

Views: 2919

Answers (3)

Illia Korzun

Reputation: 1

I ran into the same error while trying to map offset + length bytes of a file whose size was smaller than offset + length.

The problem was the access mode. If you want to map a file segment to read its contents (access=mmap.ACCESS_READ), the requested range must lie within the file size, otherwise you get this error. If you intend to write to space beyond the end of the file, set the access flag to mmap.ACCESS_WRITE instead.

import mmap
import io

FILE_NAME = 'file.dat'

with io.open(FILE_NAME, "wb+") as f:
    # set file size
    f.truncate(50)
    # map a range within the file
    # >> Success, the file size is unchanged
    mmap.mmap(f.fileno(), length=25, access=mmap.ACCESS_READ)
    # map a range beyond the end of the file with the READ flag
    # >> OSError: [WinError 8] Not enough memory resources are available to process this command
    mmap.mmap(f.fileno(), length=100, access=mmap.ACCESS_READ)
    # map a range beyond the end of the file with the WRITE flag
    # >> Success, the file size grows from 50 to 100 bytes
    mmap.mmap(f.fileno(), length=100, access=mmap.ACCESS_WRITE)

Upvotes: 0

catubc

Reputation: 508

For anyone who landed here after wasting a lot of time trying to understand this error on Windows, here is the much clearer error report you get for the same problem on Linux:

% complete:   0%|                                     | 0/20000 [00:00<?, ?it/s]  setting up memory map: shape:  (20000, 512, 512)
Traceback (most recent call last):
  File "bmi_command_line.py", line 146, in <module>
    bmi.run_BMI()
  File "/home/cat/code/bmi/bmi/bmi.py", line 585, in run_BMI
    self.bmi_update()
  File "/home/cat/code/bmi/bmi/bmi.py", line 661, in bmi_update
    self.compute_frame_number()
  File "/home/cat/code/bmi/bmi/bmi.py", line 790, in compute_frame_number
    self.newfp = np.memmap(self.fname_fluorescence,
  File "/home/cat/anaconda3/envs/bmi/lib/python3.8/site-packages/numpy/core/memmap.py", line 267, in __new__
    mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=start)
ValueError: mmap length is greater than file size
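
As a rough sketch (the file name and sizes are invented for illustration), the same oversized read-only mapping that Windows reports as WinError 8 reproduces this clearer ValueError on Linux:

import numpy as np

# create a small 50-byte file (hypothetical name, for illustration only)
with open('tiny.dat', 'wb') as f:
    f.write(b'\x00' * 50)

# Requesting 100 uint8 elements in read-only mode exceeds the 50-byte file;
# on Linux the underlying mmap.mmap call raises
# "ValueError: mmap length is greater than file size",
# while Windows reports the same situation as WinError 8.
np.memmap('tiny.dat', dtype='uint8', mode='r', shape=100)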

Upvotes: 2

auzn

Reputation: 695

It turns out the issue here is that offset + shape in the memmap call is greater than the total size of the file (i.e. I was trying to read beyond the end of the file).

The Windows error message about memory resources is a little misleading in this case.
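
As a minimal sanity check (the infile, offset and arraysize names below are placeholders standing in for the values from the question), comparing the requested range against the file size makes the real problem obvious before memmap is ever called:

import os
import numpy

# placeholder values standing in for the ones used in the question
infile = 'data.bin'
offset = 1024
arraysize = 4096  # number of uint8 elements to map

# verify the requested window actually fits inside the file
filesize = os.path.getsize(infile)
if offset + arraysize > filesize:
    raise ValueError(f"requested range ends at byte {offset + arraysize}, "
                     f"but {infile} is only {filesize} bytes long")

array = numpy.memmap(infile, dtype='uint8', mode='r',
                     offset=offset, shape=arraysize).tolist()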

Upvotes: 7
