vladjamir

Reputation: 124

tcmalloc: large alloc ... killed in Google Colab

I was trying to set up MuseGAN in Google Colab. I have already downloaded the data and am now processing it into shared memory with the SharedArray package by running the script ./scripts/process_data.sh. I encountered this error:

> Loading data from '/content/musegan/scripts/../data/train_x_lpd_5_phr.npz'.
> Saving data to shared memory.
> tcmalloc: large alloc 6245990400 bytes == 0x26b6000 @  0x7f97d2bea1e7 0x7f97d08e0a41 0x7f97d0943bb3 0x7f97d08e4937 0x5553b5 0x5a730c 0x503073 0x507641 0x504c28 0x502540 0x502f3d 0x507641 0x501945 0x591461 0x59ebbe 0x545068 0x506b39 0x502209 0x502f3d 0x506859 0x504c28 0x506393 0x634d52 0x634e0a 0x6385c8 0x63915a 0x4a6f10 0x7f97d27e7b97 0x5afa0a
> ./scripts/process_data.sh: line 5:   360 Killed                  python "$DIR/../src/process_data.py" "$DIR/../data/train_x_lpd_5_phr.npz"

Can someone explain this? I don't understand why it happens. I first encountered it when I ran the script on a machine without a GPU (i.e. CPU only), which is why I moved to Google Colab.
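For context, here is a minimal sketch of what a script like process_data.py presumably does (the segment name shm://train_x and the psutil check are my own assumptions, not MuseGAN's actual code): the array is decompressed from the .npz into RAM and then copied into a shared-memory segment, so the process briefly needs roughly twice the array size.

```python
import numpy as np
import psutil
import SharedArray as sa

filepath = "data/train_x_lpd_5_phr.npz"

# How much RAM is free before starting (a Colab VM has roughly 12 GB in total).
print("Available RAM: %.1f GB" % (psutil.virtual_memory().available / 1e9))

with np.load(filepath) as f:
    data = f[f.files[0]]  # decompressing pulls the whole ~6 GB array into RAM

# Copying into shared memory temporarily needs about 2x the array size,
# which is what can push the process over Colab's memory limit.
shared = sa.create("shm://train_x", data.shape, data.dtype)
np.copyto(shared, data)
del data  # release the private copy once the shared one is filled
```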

Upvotes: 4

Views: 11492

Answers (3)

oski86

Reputation: 865

This is a problem with how Google Colab reacts to rapidly increasing memory usage: it assumes an OOM is about to occur even though it won't.

See https://github.com/huggingface/transformers/issues/4668
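Note that the "tcmalloc: large alloc" line itself is only a report that a single allocation crossed tcmalloc's reporting threshold (about 1 GB by default); the "Killed" line is the actual out-of-memory kill. A hedged sketch of how one might silence the report for a child process by raising TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD (the paths are the ones from the question, and this changes nothing about memory usage):

```python
import os
import subprocess

# Inherit the current environment but raise tcmalloc's large-alloc report
# threshold to 10 GiB so the warning line is no longer printed.
env = os.environ.copy()
env["TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD"] = str(10 * 1024 ** 3)

subprocess.run(
    ["python", "src/process_data.py", "data/train_x_lpd_5_phr.npz"],
    env=env,
    check=True,
)
```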

Upvotes: 5

mihawk26

Reputation: 53

I don't know how to fix it when running from Python files, but you can copy the contents of the file that loads the data into RAM (e.g. train.py) directly into a Google Colab cell and run it there; it gives no error as long as you have sufficient RAM. I don't know the cause of this behaviour, though.
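A hedged sketch of the same idea without pasting the whole file: execute the script inside the notebook's own process instead of a separate child process. The paths and the assumption that the script reads its input path from sys.argv are mine, taken from the question.

```python
import sys

# Make the script see the .npz path as its command-line argument, then run
# its source in the notebook process rather than via "!python ...".
sys.argv = ["process_data.py", "/content/musegan/data/train_x_lpd_5_phr.npz"]
exec(open("/content/musegan/src/process_data.py").read())
```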

Upvotes: 0

Micheal Abaho

Reputation: 1

I was facing the same issue in Colab; I simply refreshed the page and didn't see it anymore.

Upvotes: -4
