Zhiheng Yao

Reputation: 48

Large datasets and CUDA memory issue

I was processing a large dataset and ran into this error: "RuntimeError: CUDA out of memory. Tried to allocate 1.35 GiB (GPU 0; 8.00 GiB total capacity; 3.45 GiB already allocated; 1.20 GiB free; 4.79 GiB reserved in total by PyTorch)."

Any thoughts on how to solve this?

Upvotes: 0

Views: 1030

Answers (2)

Delton Myalil

Reputation: 345

If you are using full-batch gradient descent (or something similar), switch to mini-batch training with a smaller batch size, and set the same batch size in your dataloaders.
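A minimal sketch of what that looks like in PyTorch; the dummy dataset and the batch size of 16 are placeholders to adapt to your own setup:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for the real dataset.
features = torch.randn(10_000, 128)
labels = torch.randint(0, 10, (10_000,))
dataset = TensorDataset(features, labels)

# A smaller batch_size means each forward/backward pass allocates less GPU memory.
loader = DataLoader(dataset, batch_size=16, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for batch_features, batch_labels in loader:
    batch_features = batch_features.to(device)
    batch_labels = batch_labels.to(device)
    # forward pass, loss, backward pass, optimizer.step() go here
```

If even a batch size of 1 still runs out of memory, the model itself is too large for the GPU rather than the batches.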

Upvotes: 1

YuFan Ma

Reputation: 46

I ran into the same problem before. It's not a bug; you simply ran out of memory on your GPU.

  1. One way to solve it is to reduce the batch size until your code runs without this error.
  2. If that does not work, take a closer look at your model. A single 8 GiB GPU may not be able to hold a large, deep model; consider switching to a GPU with more memory or using a hosted environment (Google Colab can help).
  3. If you are only running evaluation, moving the model and tensors to the CPU works fine (see the sketch after this list).
  4. Try a model compression algorithm.
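A minimal sketch of point 3, assuming an already-trained model (the `nn.Linear` layer and input sizes here are placeholders); both the move to CPU and `torch.no_grad()` reduce GPU memory use during evaluation:

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)        # stand-in for your trained model
inputs = torch.randn(32, 128)     # stand-in for your evaluation batch

model = model.cpu().eval()        # run evaluation on the CPU instead of the GPU
with torch.no_grad():             # skip building the autograd graph to save memory
    outputs = model(inputs.cpu())
print(outputs.shape)              # torch.Size([32, 10])
```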

Upvotes: 3
