spiridon_the_sun_rotator

Reputation: 1044

Is there a way to figure out whether a PyTorch model is on the CPU or on a CUDA device?

I would like to figure out whether a PyTorch model is on the CPU or on CUDA, in order to initialize some other variable as torch.Tensor or torch.cuda.Tensor depending on the model.

However, looking at the output of the dir() function I see only the .cpu(), .cuda(), and .to() methods, which move the model to the CPU, a GPU, or whatever device is passed to to(). A PyTorch tensor has an is_cuda attribute, but there is no analogue for the whole model.

Is there some way to deduce this for a model, or does one need to refer to a particular weight?

Upvotes: 3

Views: 5318

Answers (1)

Ivan

Reputation: 40648

No, there is no such function for nn.Module; I believe this is because a model's parameters can sit on multiple devices at the same time.

If you're working with a single device, a workaround is to check the first parameter:

next(model.parameters()).is_cuda

As described here.
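
For instance, here is a minimal sketch along those lines (the nn.Linear model is only a stand-in, and it assumes all parameters share one device). Reading .device from the first parameter also sidesteps the torch.Tensor vs torch.cuda.Tensor choice, since the new tensor can be created directly on the model's device:

import torch
import torch.nn as nn

# Any nn.Module will do for the illustration; nn.Linear is just an example
model = nn.Linear(4, 2)
if torch.cuda.is_available():
    model = model.cuda()

# Device of the first parameter; assumes all parameters live on one device
device = next(model.parameters()).device

# Create the extra variable directly on the model's device
x = torch.zeros(8, 4, device=device)

print(next(model.parameters()).is_cuda)  # True only when the model is on a GPU
print(x.device)                          # same device as the model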

Upvotes: 6
