Glyph

Reputation: 813

How to clear GPU memory after PyTorch model training without restarting kernel

I am training PyTorch deep learning models in a JupyterLab notebook, using CUDA on a Tesla K80 GPU. During training iterations, the 12 GB of GPU memory are in use. I finish training by saving the model checkpoint, but want to continue using the notebook for further analysis (analyzing intermediate results, etc.).

However, these 12 GB remain occupied after training finishes (as seen in nvtop). I would like to free up this memory so that I can use it for other notebooks.

My workaround so far is to restart this notebook's kernel, but that doesn't solve the problem, because then I can no longer use the same notebook and the outputs it has computed so far.

Upvotes: 59

Views: 121707

Answers (9)

yasin gourkani

Reputation: 1

This works for me: call torch.cuda.empty_cache() at the end of the training process. I've written a function that frees the GPU RAM, although most of its body is incidental; the torch.cuda.empty_cache() call at the end is what actually clears the memory.

import torch

def empty_cuda_mem(model, data_loader, loss_fn, device):
    # One forward pass with autograd disabled, then release PyTorch's cached GPU blocks.
    with torch.no_grad():
        x_batch, y_batch = next(iter(data_loader))
        yp = model(x_batch.to(device))
        loss = loss_fn(yp, y_batch.to(device))  # result is discarded; the point is the cleanup below
        torch.cuda.empty_cache()

Upvotes: 0

KingRabbit

Reputation: 11

Here is the full sequence that worked for me:

  1. Move the model to the CPU, e.g. model = model.cpu()
  2. del model
  3. with torch.no_grad(): torch.cuda.empty_cache()
  4. import gc
  5. gc.collect()

I solved the OOM error by following these steps without restarting the kernel; a combined sketch follows.
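
A minimal end-to-end sketch of those steps (it assumes model is a trained nn.Module currently on the GPU; everything else is the standard library or PyTorch):

import gc
import torch

model = model.cpu()              # 1. move the parameters off the GPU
del model                        # 2. drop the last Python reference to them
with torch.no_grad():
    torch.cuda.empty_cache()     # 3. return PyTorch's cached blocks to the driver
gc.collect()                     # 4./5. let Python reclaim any lingering objects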

Upvotes: 1

Alex

Reputation: 101

If I remember correctly this helped me:

If I delete the model, I can reuse the GPU memory:

# model_1 training
del model_1
# model_2 training works

If I try to keep the model via a deep copy, the copy keeps its connection to the GPU, and I cannot reuse the assigned GPU memory:

import copy
# model_1 training
model_1_save = copy.deepcopy(model_1)
del model_1
# model_2 training memory error

If I want to use the first model later and train a second model on the GPU:

# model_1 training
model_1.to("cpu")
# model_2 training works
model_2.to("cpu")
model_1.to("cuda")
# model_1 continuing training works
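
A self-contained sketch of that last pattern, with illustrative layer sizes and names (not from the original answer):

import torch
import torch.nn as nn

device = "cuda"

model_1 = nn.Linear(8192, 8192).to(device)
# ... model_1 training ...
model_1.to("cpu")                 # parameters leave the GPU but stay usable later
torch.cuda.empty_cache()          # release the cached blocks they occupied

model_2 = nn.Linear(8192, 8192).to(device)
# ... model_2 training works ...
model_2.to("cpu")

model_1.to(device)                # bring the first model back
# ... model_1 continuing training works ...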

Upvotes: 0

MikeB2019x

Reputation: 1205

Apparently you can't clear the GPU memory via a command once the data has been sent to the device. There is a reference to this in the PyTorch GitHub issues, BUT the following seems to work for me.

Context: I have PyTorch running in JupyterLab in a Docker container with access to two GPUs [0, 1]. Two notebooks are running: the first is on a long job, while I use the second for small tests. When I started doing this, repeated tests seemed to progressively fill the GPU memory until it maxed out. I tried all the suggestions (del, clearing the GPU cache, etc.) and nothing worked until the following.

To clear the second GPU, I first installed numba (pip install numba) and then ran the following code:

from numba import cuda
 
cuda.select_device(1) # choosing second GPU 
cuda.close()

Note that I don't actually use numba for anything except clearing the GPU memory. Also, I selected the second GPU because my first is being used by the other notebook; pass whichever GPU index you need. Finally, while this doesn't kill the Jupyter kernel, it does tear down the CUDA context for that process, so you can't use it intermittently during a run to free up memory.

Upvotes: 4

Maunish Dave

Reputation: 519

with torch.no_grad():
    torch.cuda.empty_cache()

Upvotes: 32

If you have a variable called model, you can try to free the GPU memory it occupies (assuming it lives on the GPU) by first dropping the reference with del model and then calling torch.cuda.empty_cache().
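
A minimal sketch of that sequence, using torch.cuda.memory_allocated() to check the effect (it assumes model is the only remaining reference to the network):

import gc
import torch

print(torch.cuda.memory_allocated())   # bytes held by live tensors before cleanup
del model                               # drop the last reference to the parameters
gc.collect()                            # make sure Python has actually reclaimed them
torch.cuda.empty_cache()                # hand the cached blocks back to the driver
print(torch.cuda.memory_allocated())    # should be (close to) zero afterwards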

Upvotes: 1

Karl

Reputation: 5473

The answers so far are correct for the CUDA side of things, but there's also an issue on the IPython side.

When you have an error in a notebook environment, the IPython shell stores the traceback of the exception so you can access the error state with %debug. The issue is that this requires every variable involved in the error to be held in memory, and they aren't reclaimed by methods like gc.collect(). Basically, all your variables get stuck and the memory is leaked.

Usually, raising a new exception will free up the state of the old one, so triggering something like 1/0 may help (a sketch follows). However, things can get weird with CUDA variables, and sometimes there's no way to clear your GPU memory without restarting the kernel.
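
A minimal two-cell sketch of that trick; the dummy exception must be left uncaught so IPython replaces the stored traceback, and the rest is the usual cleanup:

# Cell 1: raise a cheap, uncaught exception so IPython drops the old traceback
1 / 0

# Cell 2: the variables from the failed training cell can now be reclaimed
import gc
import torch

gc.collect()                     # let Python free the objects that were stuck
torch.cuda.empty_cache()         # release PyTorch's cached GPU blocks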

For more detail see these references:

https://github.com/ipython/ipython/pull/11572

How to save traceback / sys.exc_info() values in a variable?

Upvotes: 40

prosti

Reputation: 46479

If you set an object that uses a lot of memory to None, like this:

obj = None

And after that you call:

gc.collect() # Python thing

This way you may avoid restarting the notebook kernel.


If you would still like to see the memory freed in nvidia-smi or nvtop, you may run:

torch.cuda.empty_cache() # PyTorch thing

to empty the PyTorch cache.
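
A short sketch of why nvtop keeps reporting usage until the cache is emptied, using PyTorch's own counters (the tensor here is just a stand-in for whatever large object you are done with):

import gc
import torch

obj = torch.empty(1024, 1024, 256, device="cuda")  # ~1 GiB stand-in for a large object
obj = None                                # drop the reference (Python thing)
gc.collect()                              # the tensor is freed, but PyTorch keeps the blocks cached
print(torch.cuda.memory_allocated())      # live tensor bytes: now ~0
print(torch.cuda.memory_reserved())       # cached bytes nvtop still sees: still large
torch.cuda.empty_cache()                  # return the cached blocks (PyTorch thing)
print(torch.cuda.memory_reserved())       # now ~0 as well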

Upvotes: 30

iScripters

Reputation: 421

I've never worked with PyTorch myself, but Google has several results which all basically say the same thing: torch.cuda.empty_cache()

https://forums.fast.ai/t/clearing-gpu-memory-pytorch/14637

https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530

How to clear Cuda memory in PyTorch

Upvotes: 0
