MUAS

Reputation: 626

PyTorch running out of memory: DefaultCPUAllocator can't allocate memory

I'm trying to optimize some weights in PyTorch, but I keep getting this error:

RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 8000000000000 bytes. Error code 12 (Cannot allocate memory).

Namely, things blow up when I run (weights * col).sum() / weights.sum(). weights is a tensor of shape (1000000, 1) and col is also a tensor of shape (1000000, 1). Both tensors are decently sized, but it seems odd that these operations use up all the memory on my machine (8 GB).
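A minimal sketch of what I'm running (random data standing in for the real tensors):

    import torch

    # Stand-ins for the real data; both are (1000000, 1)
    weights = torch.rand(1000000, 1)
    col = torch.rand(1000000, 1)

    # Weighted mean of col -- with matching shapes each tensor is only ~4 MB
    result = (weights * col).sum() / weights.sum()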

Upvotes: 11

Views: 27003

Answers (1)

Dallin Clayton

Reputation: 124

It could be that your weights and col tensors are not aligned, i.e. one of them is transposed, so its shape is (1, 1000000) instead of (1000000, 1). When you do (weights * col), the two shapes are broadcast together, producing a (1000000, 1000000) tensor. That is almost certainly where the extreme memory usage comes from: the result has 10^12 elements (a million times bigger than your original tensors), and at 8 bytes per element (a float64 tensor) that is exactly the 8000000000000 bytes in the error message.
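You can see the same broadcasting behavior with small tensors (a sketch; the shapes below stand in for yours):

    import torch

    a = torch.rand(5, 1)
    b = torch.rand(1, 5)    # transposed relative to a
    print((a * b).shape)    # torch.Size([5, 5]) -- broadcasting expands both dims

    # Printing weights.shape and col.shape should confirm the mismatch.
    # If one tensor really is transposed, reshaping it to a column restores
    # the elementwise product:
    # col = col.reshape(-1, 1)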

Upvotes: 10
