jeffhale

Reputation: 4032

Set higher shared memory to avoid RuntimeError with PyTorch on Google Colab

I'm using the PyTorch 1.0 preview with fastai v1.0 in Colab.

For more memory-intensive tasks (nothing huge), I often get `RuntimeError: DataLoader worker (pid 13) is killed by signal: Bus error.`

It looks like a shared memory issue: https://github.com/pytorch/pytorch/issues/5040#issue-294274594

The fix appears to be raising the shared memory of the Docker container:

https://github.com/pytorch/pytorch/issues/2244#issuecomment-318864552

That comment says the container's shared memory wasn't set high enough, and that adding `--shm-size 8G` to the `docker run` command does the trick.

How can I increase the shared memory of the docker container running in Colab or otherwise avoid this error?
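For context, outside of Colab the linked fix is applied when the container is launched. A minimal sketch (the image name is just an illustrative placeholder, not from the original thread):

```shell
# Start a container with 8 GiB of shared memory instead of Docker's
# 64 MiB default; /dev/shm is what DataLoader workers use to pass
# batches between processes.
docker run --shm-size=8G -it pytorch/pytorch /bin/bash
```

On Colab, however, you don't control the `docker run` invocation, which is what the question is about.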

Upvotes: 0

Views: 2900

Answers (1)

Ami F

Reputation: 2282

It's not possible to modify this setting in Colab, but the default has already been raised to fix this issue, so you shouldn't need to change it further: https://github.com/googlecolab/colabtools/issues/329
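From a notebook you can confirm the current shared-memory allocation by checking the size of `/dev/shm` directly; a quick sketch using only the standard library (this check is my addition, not from the linked issue):

```python
import shutil

# /dev/shm is the shared-memory mount that PyTorch DataLoader workers
# use for inter-process tensors; the Bus error appears when it fills up.
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm size: {total / 2**30:.2f} GiB")
```

If this reports well above the old 64 MiB default, the raised limit from the Colab fix is in effect.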

Upvotes: 1
