Reputation: 48
I was processing a large dataset and ran into this error: "RuntimeError: CUDA out of memory. Tried to allocate 1.35 GiB (GPU 0; 8.00 GiB total capacity; 3.45 GiB already allocated; 1.20 GiB free; 4.79 GiB reserved in total by PyTorch)."
Any thoughts on how to solve this?
Upvotes: 0
Views: 1030
Reputation: 345
If you are using full-batch gradient descent (or something similar), switch to mini-batch gradient descent with a smaller batch size and reflect that change in your DataLoader, as in the sketch below.
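For example, a minimal sketch of a mini-batch training loop (the dataset, model, and batch size here are placeholders, not taken from the question; it assumes a CUDA device is available):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset; substitute your own Dataset object.
dataset = TensorDataset(torch.randn(10_000, 128), torch.randint(0, 10, (10_000,)))

# Instead of pushing the whole dataset through at once, iterate over small
# batches; lower batch_size (32 -> 16 -> 8 ...) until it fits in GPU memory.
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

for inputs, targets in loader:
    # Only the current mini-batch is moved to the GPU at a time.
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```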
Upvotes: 1
Reputation: 46
I ran into the same problem before. It's not a bug; you simply ran out of memory on your GPU.
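To confirm where the memory is going, PyTorch's built-in memory utilities can help (a small illustrative snippet, assuming a single CUDA device):

```python
import torch

# Memory held by live tensors vs. memory reserved by PyTorch's caching allocator
# (the two figures reported in the error message).
print(torch.cuda.memory_allocated() / 1024**3, "GiB allocated by tensors")
print(torch.cuda.memory_reserved() / 1024**3, "GiB reserved by the caching allocator")

# Releases cached but unused blocks back to the driver; it does not free
# memory held by tensors that are still referenced.
torch.cuda.empty_cache()
```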
Upvotes: 3