Reputation: 1424
I have to stack some of my own layers on top of different kinds of PyTorch models that may live on different devices.
E.g. A is a CUDA model and B is a CPU model (but I don't know which until I inspect the device type). The new models are C and D respectively, where
class NewModule(torch.nn.Module):
    def __init__(self, base):
        super(NewModule, self).__init__()
        self.base = base
        self.extra = my_layer()  # e.g. torch.nn.Linear()

    def forward(self, x):
        y = self.base(x)
        z = self.extra(y)
        return z
...
C = NewModule(A) # cuda
D = NewModule(B) # cpu
However, I must move base and extra to the same device, i.e. base and extra of C are CUDA models and D's are CPU models. So I tried this __init__:
def __init__(self, base):
    super(NewModule, self).__init__()
    self.base = base
    self.extra = my_layer().to(base.device)
Unfortunately, torch.nn.Module has no device attribute, so this raises an AttributeError.
What should I do to get the device of base? Or is there any other way to make base and extra end up on the same device automatically, even when the structure of base is unspecified?
Upvotes: 69
Views: 153564
Reputation: 2279
@Duane's answer creates a parameter in the model (albeit a tiny tensor).
I think this answer is slightly more pythonic and elegant:
import torch
from torch import nn

class Model(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()
        self.device = torch.device('cpu')  # device attribute is not defined by default for modules

    def _apply(self, fn):
        # https://stackoverflow.com/questions/54706146/moving-member-tensors-with-module-to-in-pytorch
        # Override _apply so the device attribute is updated whenever the
        # module is moved. This lets the module know directly where it lives
        # when creating new attributes or tensors.
        super()._apply(fn)
        # fn operates on tensors, so probe it with an empty tensor
        # to discover the target device
        self.device = fn(torch.empty(0, device=self.device)).device
        return self
net.cuda(), net.float(), etc. will all work as well, since those all call _apply rather than to (as can be seen in the source).
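A minimal usage sketch (the Linear layer and its sizes below are placeholders of my own, not part of the answer):

net = Model()
net.fc = nn.Linear(4, 2)   # stand-in submodule, just for illustration
print(net.device)          # cpu

if torch.cuda.is_available():
    net = net.cuda()
    print(net.device)      # cuda:0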
An alternative solution, from @Kani's comment on the accepted answer, is also very elegant:
class Model(nn.Module):
    def __init__(self, *args, **kwargs):
        """Constructor for the neural network."""
        super().__init__()

    @property
    def device(self):
        # assumes all parameters live on the same device
        return next(self.parameters()).device
You access the device through model.device, just as you would for a parameter. Note that this solution does not work when the model has no parameters (next() raises StopIteration).
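A quick sketch of how the property behaves (the Linear layer here is an illustrative placeholder):

model = Model()
model.fc = nn.Linear(8, 3)   # give the model at least one parameter
print(model.device)          # cpu

empty = Model()              # a model with no parameters at all
# accessing empty.device would raise StopIteration,
# since next() finds nothing in empty.parameters()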
Upvotes: 12
Reputation: 5140
My solution works in 99% of cases.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # empty parameter that is moved together with the module
        self.dummy_param = nn.Parameter(torch.empty(0))

    def forward(self, x):
        device = self.dummy_param.device
        ...  # etc.
Thereafter, dummy_param will always be on the same device as the module Net, so you can query it at any time, e.g.:
net = Net()
net.dummy_param.device   # cpu
net = net.to('cuda')
net.dummy_param.device   # cuda:0
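A typical reason to grab the device inside forward is allocating new tensors in the right place; here is a small sketch under that assumption (the Linear layer and the noise tensor are my own illustrative additions):

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.dummy_param = nn.Parameter(torch.empty(0))
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        device = self.dummy_param.device
        # allocate a fresh tensor on the same device as the module
        noise = torch.randn(x.shape, device=device)
        return self.fc(x + noise)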
Upvotes: 43
Reputation: 3727
This question has been asked many times (1, 2). Quoting the reply from a PyTorch developer:
That’s not possible. Modules can hold parameters of different types on different devices, and so it’s not always possible to unambiguously determine the device.
The recommended workflow (as described on the PyTorch blog) is to create the device object separately and use it everywhere. Copy-pasting the example from the blog here:
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
Do note that there is nothing stopping you from adding a .device property to the models.
As mentioned by Kani (in the comments), if all the parameters in the model are on the same device, one could use next(model.parameters()).device.
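Applied to the NewModule from the question, that one-liner could look like the sketch below (assuming base has at least one parameter; my_layer remains the placeholder from the question):

class NewModule(torch.nn.Module):
    def __init__(self, base):
        super().__init__()
        self.base = base
        # infer the device from any parameter of base
        device = next(base.parameters()).device
        self.extra = my_layer().to(device)

    def forward(self, x):
        return self.extra(self.base(x))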
Upvotes: 91