Reputation: 133
Can someone kindly help me trace the root cause of the following error? I don't understand where the switch between GPU and CPU is taking place, since from the very beginning I have instructed Colab to use the GPU.
Also, following the error stack trace, it points to the labels; what could potentially be wrong there?
Thanks in advance!
Upvotes: 0
Views: 3512
Reputation: 714
I would recommend taking a look at this YouTube series to understand how PyTorch works.
For your issue specifically, I think you'll find your answer in this video.
The idea is that you need to explicitly place both your model and your data on your GPU, using the .to(device) method, where device is cuda if a GPU is available and cpu otherwise. For the model:
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
You also need to do the same for your data. I assume you have a for loop iterating over your batches, so you could do it like this:
for batch in train_loader:
    # ...
    x, y = batch[0].to(device), batch[1].to(device)
    # ...
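Putting it together, here is a minimal sketch of one training step (the model, loss function, and optimizer below are placeholders, not taken from your code, and train_loader stands in for your existing DataLoader). Note that the labels are moved to the device as well; a traceback pointing at the labels usually means they stayed on the CPU while the model's outputs are on the GPU:

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical model / loss / optimizer, just to illustrate the pattern
model = nn.Linear(10, 2).to(device)     # model parameters now live on the GPU
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for inputs, labels in train_loader:     # train_loader is your existing DataLoader
    inputs = inputs.to(device)          # move the batch of inputs to the GPU
    labels = labels.to(device)          # move the labels too, otherwise the loss
                                        # compares GPU outputs against CPU labels
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()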
Upvotes: 2