Javed Akhtar

Reputation: 55

How to allocate more memory to PyTorch

I keep getting CUDA out of memory errors. I have a 3090 with 24 GB of VRAM, but torch only allocates about 7 GB while roughly 15 GB is always free.

RuntimeError: CUDA out of memory. Tried to allocate 92.00 MiB (GPU 0; 24.00 GiB total capacity; 6.90 GiB already allocated; 14.90 GiB free; 6.98 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I also checked nvidia-smi to see if any other process was taking up memory, but found none.

[screenshot: nvidia-smi output]

This was the peak memory usage before it gave me the error.

Any help?

Upvotes: 2

Views: 9201

Answers (1)

simeonovich

Reputation: 901

Looks like something is stopping torch from accessing more than 7 GB of memory on your card. Try running torch.cuda.empty_cache() at the beginning of your script; this releases any cached memory that can be safely freed back to the driver.
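Since your error message itself suggests setting max_split_size_mb, you could combine both hints in a short sketch. Note the 128 MiB split size below is just an illustrative value to show the syntax, not a tuned recommendation:

    import os

    # Must be set before torch initializes CUDA; "128" here is an
    # illustrative value, tune it for your workload.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch

    # Release cached blocks the allocator is holding but not using.
    torch.cuda.empty_cache()

Keep in mind empty_cache() only returns memory PyTorch has already cached; it can't reclaim memory held by other processes.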

If that doesn't work, try killing as many of the processes listed as using the GPU as possible, and maybe restart your machine. Even if the GPU Memory Usage column shows N/A for a process, it may still be reserving some memory. If many processes each reserve a small amount of memory across a wide range of blocks, that can stop torch from using much of the available capacity.
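To compare what the driver reports as free against what PyTorch itself has allocated and reserved, you could run a quick diagnostic like this (these are all standard torch.cuda calls, nothing project-specific):

    import torch

    # What the CUDA driver reports: free and total bytes on the device.
    free, total = torch.cuda.mem_get_info()
    print(f"driver: {free / 2**30:.2f} GiB free of {total / 2**30:.2f} GiB")

    # What PyTorch's caching allocator has allocated vs. reserved.
    print(f"torch allocated: {torch.cuda.memory_allocated() / 2**30:.2f} GiB")
    print(f"torch reserved:  {torch.cuda.memory_reserved() / 2**30:.2f} GiB")

    # Detailed breakdown, useful for spotting fragmentation.
    print(torch.cuda.memory_summary(abbreviated=True))

If the driver shows plenty of free memory but torch still fails on a small allocation, that points at fragmentation inside torch's reserved pool, which is what max_split_size_mb addresses.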

Upvotes: 0
