Mohit Lamba

Reputation: 1373

Send all parameters and objects of a class to the same device in PyTorch

I have the following dummy code in PyTorch:

import torch
import torch.nn as nn

class inside(nn.Module):
    def __init__(self):
        super(inside, self).__init__()
        self.weight_h = nn.Parameter(SOMETHING GOES HERE)  # send to CPU or GPU
        self.weight_v = nn.Conv2d(SOMETHING GOES HERE)     # send to CPU or GPU

    def forward(self, x):
        ...
        return x

class main_class(nn.Module):
    def __init__(self):
        super(main_class, self).__init__()        
        self.paramA = nn.Conv2d(...)
        self.paramB = nn.Conv2d(...)
        self.in_class = inside()

    def forward(self, x):
        ...
        return x

device = torch.device("cuda:2")  # define what GPU device to use, or torch.device("cpu")

model = main_class()
model = model.to(device)

Suppose in this code the device is GPU 2. Then I know that the parameters self.paramA and self.paramB have definitely been loaded onto GPU 2 and not onto the CPU or any other GPU. But what can be said of self.weight_h and self.weight_v? Are they guaranteed to be on GPU 2, or do I need to move the parameters of the inside class explicitly?

I am using PyTorch 1.8.1, but please suggest a method that is general and holds for any PyTorch version >= 1.0.

Upvotes: 0

Views: 1203

Answers (1)

user2736738

Reputation: 30926

I think the phrase "this code" needs a bit of clarification first. There are two kinds of things that can be put on the GPU. One is the data: you can keep your input tensors on the GPU, and so on.
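For example, moving data means calling .to() on each tensor yourself. A minimal sketch (the input shape below is made up for illustration):

import torch

device = torch.device("cuda:2") if torch.cuda.device_count() > 2 else torch.device("cpu")

x = torch.randn(8, 3, 32, 32)  # dummy input batch, created on the CPU
x = x.to(device)               # tensor.to() returns a copy on the target device
print(x.device)

Note that tensor.to() is not in-place; you must keep the returned tensor.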

The other is the model itself: the model can be transferred to the GPU. In this case, when you call .to(...) on the top-level module (model.to(device) in the question's code), all the modules registered inside it as part of that model are transferred to the GPU along with it, including their parameters and buffers.
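You can verify this with a sketch along the lines of the question's code (the shapes and Conv2d arguments below are placeholders I made up):

import torch
import torch.nn as nn

class inside(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight_h = nn.Parameter(torch.randn(4, 4))  # placeholder shape
        self.weight_v = nn.Conv2d(3, 8, kernel_size=3)   # placeholder args

class main_class(nn.Module):
    def __init__(self):
        super().__init__()
        self.paramA = nn.Conv2d(3, 8, kernel_size=3)
        self.in_class = inside()

device = torch.device("cuda:2") if torch.cuda.device_count() > 2 else torch.device("cpu")
model = main_class().to(device)

# Every registered parameter, including the nested ones, reports the same device.
for name, p in model.named_parameters():
    print(name, p.device)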

I distinguish these two because it is easy to mix them up.

So the final answer is: yes, they are guaranteed to be on the GPU, as long as those inner weights are registered as part of the larger model (which nn.Parameter attributes and submodules like nn.Conv2d are).
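One caveat worth knowing (standard PyTorch behavior, not specific to any version since 1.0): .to() only moves registered parameters and buffers. A plain tensor stored as an attribute is invisible to .to() and stays where it was created, so register such tensors via nn.Parameter or register_buffer. A minimal sketch:

import torch
import torch.nn as nn

class inside(nn.Module):
    def __init__(self):
        super().__init__()
        self.good = nn.Parameter(torch.randn(4))           # moved by .to()
        self.register_buffer("also_good", torch.zeros(4))  # moved by .to()
        self.bad = torch.randn(4)                          # NOT moved by .to()

m = inside()
if torch.cuda.is_available():
    m = m.to("cuda")
    print(m.good.device, m.also_good.device, m.bad.device)
    # -> cuda:0 cuda:0 cpu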

Upvotes: 1
