Reputation: 1837
I am loading many images into memory because I need to iterate over them very often to perform random data augmentation while training a neural network. My machine has 64GB of memory, of which more than 60GB are available. It runs 64-bit Linux and Python 3.7.4.
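For context, the loading loop is roughly the following (a minimal sketch; image_paths is a placeholder for my actual list of file names):

import cv2

images = []
for path in image_paths:        # image_paths: placeholder list of image file names
    img = cv2.imread(path)      # decodes the image into a numpy uint8 array
    images.append(img)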
My script runs fine until the process exceeds 16GB of memory. Then I see this error message:
cv2.error: OpenCV(3.4.2) /tmp/build/80754af9/opencv-suite_1535558553474/work/modules/core/src/alloc.cpp:55: error: (-4:Insufficient memory) Failed to allocate 18874368 bytes [this is 18MB] in function 'OutOfMemoryError'
Is there an internal memory limit in cv2 and/or Python?
I also tried the following with NumPy:

import numpy as np

a = np.zeros((16*1024*1024*1024,), dtype=np.uint8) + 1   # works and allocates 16GB
a = np.zeros((17*1024*1024*1024,), dtype=np.uint8) + 1   # crashes
So I think it is a Python or NumPy issue, as cv2 uses NumPy arrays internally.
Interestingly, I am able to allocate more than 16GB using PyTorch:

import torch

a = torch.ones((28*1024*1024*1024,), dtype=torch.uint8)  # works, but fails when I try more than 28GB
I forgot to mention that I am running everything inside a SLURM job. I don't know how to find out whether that is the issue, because I have no other machine with that much memory.
EDIT: Before loading each image, I print the memory information using psutil. This is the output right before the crash:
svmem(total=134773501952, available=116365168640, percent=13.7, used=17686675456, free=112370987008, active=18417344512, inactive=2524413952, buffers=176410624, cached=4539428864, shared=87986176, slab=371335168)
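For reference, that output comes from a call along these lines (a minimal sketch of what I print before each image is loaded):

import psutil

print(psutil.virtual_memory())   # prints the svmem(...) summary shown above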
Upvotes: 1
Views: 204
Reputation: 1837
The issue was not related to Python or OpenCV. My ulimit -v setting was too low. Running ulimit -v unlimited solved the problem.
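For anyone hitting the same thing: the virtual-memory limit that ulimit -v controls can also be checked from inside the Python process. A minimal sketch (the soft/hard values are whatever your shell or SLURM set up):

import resource

# ulimit -v corresponds to RLIMIT_AS, the maximum address-space size (in bytes here)
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("soft:", "unlimited" if soft == resource.RLIM_INFINITY else soft)
print("hard:", "unlimited" if hard == resource.RLIM_INFINITY else hard)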
Upvotes: 1