Zhou XF

Reputation: 193

out of memory when using cupy

When I was using CuPy to process a big array, an out-of-memory error came up, but when I checked nvidia-smi, the memory usage hadn't reached the limit of my GPU. I am using an NVIDIA GeForce RTX 2060 with 6 GB of GPU memory. Here is my code:

import cupy as cp

mempool = cp.get_default_memory_pool()
print(mempool.used_bytes())              # 0
print(mempool.total_bytes())             # 0

a = cp.random.randint(0, 256, (10980, 10980)).astype(cp.uint8)
a = a.ravel()
print(a.nbytes)                          # 120560400
print(mempool.used_bytes())              # 120560640
print(mempool.total_bytes())             # 602803712
# when I finish creating this array, nvidia-smi shows this:
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 430.86       Driver Version: 430.86       CUDA Version: 10.2     |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
# | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
# |===============================+======================+======================|
# |   0  GeForce RTX 2060   WDDM  | 00000000:01:00.0  On |                  N/A |
# | N/A   46C    P8     9W /  N/A |   1280MiB /  6144MiB |      1%      Default |
# +-------------------------------+----------------------+----------------------+

# but then when I run this command, an error comes out
s_values, s_idx, s_counts = cp.unique(
    a, return_inverse=True, return_counts=True)
# and the error shows
# cupy.cuda.memory.OutOfMemoryError: out of memory to allocate 964483584 bytes (total 5545867264 bytes)
# and nvidia-smi shows
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 430.86       Driver Version: 430.86       CUDA Version: 10.2     |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
# | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
# |===============================+======================+======================|
# |   0  GeForce RTX 2060   WDDM  | 00000000:01:00.0  On |                  N/A |
# | N/A   45C    P8     9W /  N/A |   5075MiB /  6144MiB |      3%      Default |
# +-------------------------------+----------------------+----------------------+

There seems to be enough space left, so why does this error happen? Is it because my GPU doesn't have enough memory, or is my code wrong, or did I not allocate memory correctly?
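For reference, the free memory that CuPy itself sees, as opposed to what nvidia-smi reports, can be checked with:

free, total = cp.cuda.Device().mem_info   # free / total device memory in bytes
print(free, total)
print(mempool.used_bytes(), mempool.total_bytes(), mempool.free_bytes())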

Upvotes: 1

Views: 6287

Answers (2)

Fu678

Reputation: 324

You can use Dask to do the same thing, as it handles the parallelization on your behalf and you never really run out of memory even if the data doesn't fit in RAM. I am attaching a link below where the author himself explains how to do it.

from dask.distributed import Client, LocalCluster
import dask.array as da
import numpy as np

cluster = LocalCluster()  # use multiple CPU workers on the machine/cluster
client = Client(cluster)
client

rs = da.random.RandomState(RandomState=np.random.RandomState)
x = rs.random((100000, 40000), chunks=(10000, 400))  # ~29.80 GB ndarray overall
x  # just ensure that the chunk size is small (~30.52 MB per chunk)

# always try to return a reduced result, not an element-wise-transformed
# array of the full size
da.exp(x).mean().compute()

da.exp(x)  # do not run this line: it will lead to holding the full ~29.8 GB
           # output in memory

In the last line, Dask tries to persist the output in memory. Since the output is on the order of 29+ GB, you will run out of memory. YouTube link for an explanation of the above code by the author of Dask.

Upvotes: 0

ycx

Reputation: 3211

Isn't 964,483,584 bigger than your mempool.total_bytes() of 602,803,712?
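For what it's worth, the failed allocation of 964,483,584 bytes is (up to CuPy's allocation-size rounding) exactly one int64 array the size of the whole image, which is most likely one of the index arrays (argsort / inverse indices) that cp.unique needs when return_inverse is requested:

10980 * 10980 * 8   # 964483200 bytes, ~920 MiB per int64 temporary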

As said in the comments, you can do it in batches instead of doing the whole computation at once; a rough sketch follows below.
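A minimal sketch of that idea (unique_in_batches is a made-up helper, and only the values and counts are merged here; the inverse indices can be rebuilt afterwards with cp.searchsorted):

import cupy as cp

def unique_in_batches(arr, batch_size=4_000_000):
    # Hypothetical helper: accumulate unique values and counts one batch
    # at a time so the temporary sort/index buffers stay small.
    values, counts = None, None
    for start in range(0, arr.size, batch_size):
        v, c = cp.unique(arr[start:start + batch_size], return_counts=True)
        if values is None:
            values, counts = v, c
        else:
            merged = cp.unique(cp.concatenate([values, v]))
            new_counts = cp.zeros(merged.size, dtype=cp.int64)
            new_counts[cp.searchsorted(merged, values)] += counts
            new_counts[cp.searchsorted(merged, v)] += c
            values, counts = merged, new_counts
    return values, counts

s_values, s_counts = unique_in_batches(a)
# s_idx = cp.searchsorted(s_values, a)  # inverse indices, itself ~964 MB as int64

Since your array is uint8, there are at most 256 distinct values, so the merge step stays tiny; only the per-batch cp.unique calls need any real workspace.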

Upvotes: 1
