tykom

Reputation: 779

GPU memory when using hardware acceleration

I have repeatedly run into a problem on Colab (especially when using PyTorch) where, after interrupting a kernel that has allocated tensors with `.cuda()`, the restarted kernel runs out of GPU memory.
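As a workaround for the leftover allocations described above, here is a minimal sketch of how one might release GPU memory held by the current process. It assumes PyTorch's caching allocator is the culprit; the helper name `free_gpu_memory` is my own, and the snippet degrades gracefully when no GPU (or no PyTorch install) is present:

```python
import gc


def free_gpu_memory():
    """Best-effort release of GPU memory held by this process.

    Returns the number of bytes the PyTorch caching allocator still
    reserves afterwards, or 0 if PyTorch/CUDA is unavailable.
    """
    gc.collect()  # drop unreferenced Python objects (and their tensors) first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # hand cached blocks back to the driver
            return torch.cuda.memory_reserved()
    except ImportError:
        pass
    return 0
```

Note that `empty_cache()` only returns *cached* blocks; tensors still referenced by live Python objects keep their memory, so `del`-ing models and tensors before calling it matters.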

A colleague suggested that GPU memory is shared between different users on the Colab platform. This seems like a really bad idea, but it could be one explanation for the behavior. Can anyone confirm whether hardware accelerators are dedicated to a particular user's session on Colab?

Thanks

Upvotes: 2

Views: 688

Answers (1)

Ami F

Reputation: 2282

Confirmed: each HW accelerator is assigned to a single user, and each notebook gets its own hardware assignment. (The former has been true since launch; the latter became true more recently.)

The question uses the expression "user's session" but doesn't define it. To be concrete: the exclusivity described above applies to the (user, notebook) pair, and does not, for example, extend to the browser tab.
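One way to sanity-check this exclusivity yourself is to compare GPU UUIDs across two notebooks: if each (user, notebook) pair gets its own accelerator, the UUIDs should differ. A hedged sketch, assuming `nvidia-smi` is on the PATH (the helper name `gpu_uuid` is my own):

```python
import subprocess


def gpu_uuid():
    """Return the UUID of the GPU assigned to this runtime,
    or None if nvidia-smi is unavailable (e.g. no GPU runtime)."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=uuid", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
```

Running this in two notebooks under the same account and seeing different UUIDs would be consistent with the per-notebook assignment described above.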

Upvotes: 1
