user11173832

Reputation:

How do I free up CUDA memory allocated by PyTorch?

RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 1; 11.91 GiB total capacity; 10.12 GiB already allocated; 21.75 MiB free; 56.79 MiB cached)

I encountered the preceding error during PyTorch training.
I'm using PyTorch in a Jupyter notebook. Is there a way to free up the GPU memory from within the notebook?

Upvotes: 2

Views: 2599

Answers (2)

Rajesh Kontham

Reputation: 349

I had the same issue some time back. There are generally two ways I go about it.

  1. Decrease the batch size

Sometimes, even after I had decreased the batch size to 1, the issue persisted. Then I changed my approach as follows.

  2. Decrease the image size (or patch size, depending on your implementation). Decreasing the image size also leaves room for you to increase your batch size.

But the second approach is not ideal, because we want the network to learn different features of the image in relation to each other. Decreasing the image size reduces the network's ability to learn finer details. (Depending on your needs, you may have to make this trade-off.)
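A rough way to see the trade-off between the two approaches: activation memory scales roughly with batch_size × channels × height × width, so halving each spatial side of the input frees about 4× the room, which you can spend on a 4× larger batch. A back-of-the-envelope sketch (the channel count and sizes below are made up for illustration):

```python
def activation_bytes(batch_size, channels, height, width, bytes_per_element=4):
    """Rough memory footprint of one float32 activation tensor, in bytes."""
    return batch_size * channels * height * width * bytes_per_element

# Full-resolution input: batch of 8, 64 channels, 512x512 feature maps.
full = activation_bytes(8, 64, 512, 512)

# Halving each spatial side cuts the per-image cost by 4x,
# so a 4x larger batch fits in the same memory budget.
half = activation_bytes(32, 64, 256, 256)

assert half == full  # same memory, 4x the batch size
```

This is only a first-order estimate; real networks hold many activation tensors plus weights and optimizer state, but the scaling argument is the same.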

Upvotes: 2

sailfish009

Reputation: 2927

Adjust the batch_size, or fix how you accumulate the loss, as described in the PyTorch FAQ:

https://pytorch.org/docs/stable/notes/faq.html

total_loss = 0
for i in range(10000):
    optimizer.zero_grad()
    output = model(input)
    loss = criterion(output)
    loss.backward()
    optimizer.step()
    # Accumulate a Python number, not the tensor: adding the tensor
    # itself keeps its autograd graph alive and leaks GPU memory.
    total_loss += float(loss)
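To release memory from the notebook itself (the original question), the usual pattern is to first drop the Python references to your large tensors, then ask PyTorch's caching allocator to return unused blocks to the driver. A minimal sketch (the variable name `x` is illustrative; this frees only memory that no live tensor is still using):

```python
import gc
import torch

# A large tensor standing in for model outputs / activations you no longer need.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(64, 3, 224, 224, device=device)

# Drop the reference first; empty_cache() cannot release memory
# that a live Python variable still points to.
del x
gc.collect()

if torch.cuda.is_available():
    # Return cached, unused blocks to the CUDA driver so other
    # processes (or the next allocation) can use them.
    torch.cuda.empty_cache()
```

Note that `empty_cache()` does not shrink memory held by tensors that are still referenced, so in a notebook you often need to `del` (or restart the kernel) before it has any visible effect in `nvidia-smi`.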

Upvotes: 0
