CUDA and PyTorch memory usage

I am using CUDA and PyTorch 1.4.0.

When I try to increase the batch_size, I get the following error:

CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.74 GiB already allocated; 7.80 MiB free; 2.96 GiB reserved in total by PyTorch)

I haven't found anything helpful about PyTorch memory usage.

Also, I don't understand why I have only 7.80 MiB available.
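The numbers in the error message can be broken down with some simple arithmetic (a sketch using the values quoted above; the split between PyTorch's cache and memory held by the driver or other processes is an interpretation, not something the message states directly):

```python
# Values taken from the error message, in GiB.
total_gib = 4.00        # GPU 0 total capacity
reserved_gib = 2.96     # "reserved in total by PyTorch"
allocated_gib = 2.74    # "already allocated" by live tensors

# Memory PyTorch has reserved but not handed out to tensors --
# this is its cache, which torch.cuda.empty_cache() can release:
cached_gib = reserved_gib - allocated_gib
print(f"cached by PyTorch: {cached_gib:.2f} GiB")

# Memory outside PyTorch's reservation. On Windows the display
# driver and other processes also draw from this, which is likely
# why only 7.80 MiB of it is reported free:
outside_gib = total_gib - reserved_gib
print(f"outside PyTorch's reservation: {outside_gib:.2f} GiB")
```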

Should I just use a video card with better performance, or can I free some memory? FYI, I have a GTX 1050 Ti, Python 3.7, torch==1.4.0, and my OS is Windows 10.

Upvotes: 5

Views: 11506

Answers (1)

Jeril

Reputation: 8521

I had the same problem, the following worked for me:

import torch

torch.cuda.empty_cache()
# start training from here

If you still get the error after this, you should decrease the batch_size.
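Before lowering batch_size, it can help to check how much of the reserved memory is actually allocated by tensors versus merely cached. A minimal sketch, assuming PyTorch >= 1.4 (where `torch.cuda.memory_reserved` exists); `gpu_memory_report` is a hypothetical helper name, and the guard lets it run even on machines without CUDA:

```python
try:
    import torch
except ImportError:
    torch = None  # allows the sketch to run where PyTorch is absent


def gpu_memory_report(device=0):
    """Return (allocated, reserved) bytes for a GPU, or None without CUDA."""
    if torch is None or not torch.cuda.is_available():
        return None
    return (torch.cuda.memory_allocated(device),
            torch.cuda.memory_reserved(device))


report = gpu_memory_report()
if report is None:
    print("CUDA not available; nothing to report")
else:
    allocated, reserved = report
    # reserved - allocated is roughly what empty_cache() can return
    # to the driver; if allocated alone is near the GPU's capacity,
    # only a smaller batch_size (or a smaller model) will help.
    print(f"allocated={allocated} bytes, reserved={reserved} bytes")
```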

Upvotes: 5
