Sean

Reputation: 3385

How to free GPU memory for a specific tensor in PyTorch?

I’m currently running a deep learning program using PyTorch and wanted to free the GPU memory for a specific tensor.

I’ve thought of methods like del and torch.cuda.empty_cache(), but del doesn’t seem to work properly (I’m not even sure if it frees memory at all) and torch.cuda.empty_cache() seems to free all unused memory, but I want to free memory for just a specific tensor.

Is there any functionality in PyTorch that provides this?

Thanks in advance.

Upvotes: 2

Views: 3760

Answers (3)

ntd

Reputation: 2372

PyTorch will keep a tensor in the computation graph (if it was created with requires_grad = True) in case you want to perform automatic differentiation later on. If you no longer need a specific tensor for gradient computation, you can use the detach method: it returns a new tensor that shares the same data but is cut out of the graph, so PyTorch no longer needs to keep that tensor's history around. This can help free up some memory (removing only the history attached to that specific tensor, not the entire computation graph).

e.g. my_tensor = my_tensor.detach() (note that detach returns a new tensor; calling it without re-binding does nothing)
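A minimal sketch of this (the tensor names here are hypothetical, just for illustration):

```python
import torch

# Hypothetical tensors for illustration.
x = torch.randn(100, 100, requires_grad=True)
y = x * 2          # y carries graph history back to x

y = y.detach()     # re-bind y to a detached view: same data, no graph
print(y.requires_grad)
```

The detached tensor shares storage with the original, so the underlying data itself is not freed until all remaining references to it are gone.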

Upvotes: 1

prosti

Reputation: 46341

obj = None and del obj are similar: both drop your reference to the object, and del additionally removes the name obj from the namespace.

However, you need to call gc.collect() to free Python memory without restarting the notebook.

If you would also like to clear obj from PyTorch's CUDA cache, run:

torch.cuda.empty_cache()

After the last command, nvidia-smi or nvtop will show the memory as freed.
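Putting these steps together, a minimal sketch (the tensor here is a made-up example, and the CUDA calls are guarded so it also runs on a CPU-only machine):

```python
import gc
import torch

# Allocate on the GPU if one is present, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
obj = torch.empty(1024, 1024, device=device)

del obj        # remove the reference
gc.collect()   # collect the Python object without restarting the notebook

if torch.cuda.is_available():
    torch.cuda.empty_cache()   # hand cached GPU blocks back to the driver
```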

Upvotes: 0

Toyo

Reputation: 751

The del operator works, but you won't see a decrease in the GPU memory used, because the memory is not returned to the CUDA device. This is an optimization technique, and from the user's perspective the memory has been "freed": it is available for allocating new tensors now.


Source: PyTorch forum
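On a machine with a GPU, this can be observed with PyTorch's allocator counters (a sketch; the tensor size is arbitrary):

```python
import torch

if torch.cuda.is_available():
    t = torch.empty(1 << 20, device="cuda")   # ~4 MB of float32
    allocated = torch.cuda.memory_allocated()
    del t
    # The allocation count drops, but the block stays in PyTorch's
    # caching allocator, so nvidia-smi still reports it as used.
    assert torch.cuda.memory_allocated() < allocated
    assert torch.cuda.memory_reserved() > 0
else:
    print("No CUDA device available; nothing to observe here.")
```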

Upvotes: 2
