Reputation: 119
I am using Colab with PyTorch CUDA for my deep learning project and ran into the problem of not being able to free up GPU memory. I have read some related posts here, but they did not solve my problem. Please guide me on how to free up the GPU memory. Thank you in advance.
Upvotes: 2
Views: 737
Reputation: 1205
This code can do that. GPUs are indexed [0, 1, ...], so if you only have one, the gpu_index is 0. Note that this clears the GPU by killing the underlying CUDA context, i.e. your model, data, etc. get cleared from the GPU, but it won't reset the kernel of your Colab/Jupyter session. Because it tears down the whole context, you can't use this during a run to clear memory as you go.
from numba import cuda

def clear_GPU(gpu_index):
    cuda.select_device(gpu_index)  # bind this thread to the chosen GPU
    cuda.close()                   # destroy its CUDA context, freeing everything on it
Install numba ("pip install numba"); last time I tried, conda gave me issues, so use pip. This is a convenience because the numba devs have taken the trouble to properly call some low-level CUDA methods, so I suppose you could do the same yourself if you have the time.
Upvotes: 1
Reputation: 8991
Try this:
torch.cuda.empty_cache()
or this:
with torch.no_grad():
    torch.cuda.empty_cache()
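One caveat worth knowing: empty_cache() can only return cached blocks that are no longer referenced, so the usual pattern is to drop the Python references first, run the garbage collector, and then empty the cache. A minimal sketch of that pattern (the model and batch names are placeholders for whatever holds your GPU memory, and the CUDA calls are guarded so the snippet also runs on a CPU-only machine):

```python
import gc
import torch

# Placeholder objects standing in for whatever is occupying your GPU.
model = torch.nn.Linear(8, 8)
batch = torch.randn(4, 8)
if torch.cuda.is_available():
    model = model.cuda()
    batch = batch.cuda()

del model, batch              # 1. drop every Python reference to the CUDA tensors
gc.collect()                  # 2. let the garbage collector reclaim them
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # 3. release cached, now-unused blocks to the driver
```

If a tensor is still reachable (e.g. stored in a list, an optimizer state, or an exception traceback), step 1 silently fails and the memory stays allocated, which is the most common reason empty_cache() appears to do nothing.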
Upvotes: 2