Reputation: 83
I am receiving an error when executing the code below:
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
import PIL.Image as Image
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model = fasterrcnn_resnet50_fpn(pretrained=True)
model.roi_heads = nn.Sequential()
model.to(device)
img = Image.open('frame_00001.jpg')
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                     std=[0.229, 0.224, 0.225])])
img = transform(img).unsqueeze(0).to(device)
model.eval()
output = model(img)
The last line raises "TypeError: forward() takes 2 positional arguments but 5 were given." The size of img is [1, 3, 960, 1280]. If I wrap img in a list before passing it to the model (output = model([img])), I get "ValueError: images is expected to be a list of 3d tensors of shape [C, H, W], got torch.Size([1, 3, 960, 1280])." And if I then reshape it with img.view(3, 960, 1280), I get the original TypeError again.
What is the solution to this problem? Thank you.
Upvotes: 0
Views: 765
Reputation: 36
I'm not sure what you're trying to do with the line:
model.roi_heads = nn.Sequential()
That line is what causes the error: inside the model, roi_heads is called with four positional arguments (the features, the region proposals, the image sizes and the targets), but nn.Sequential.forward() only accepts a single input, which is why you get "forward() takes 2 positional arguments but 5 were given." I ran your code excerpt without this line and it worked as expected (apart from a few deprecation warnings coming from the model being used, the output looks reasonable).
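For reference, here is a minimal sketch of the working version: it is the code from the question with only the model.roi_heads = nn.Sequential() line removed; the torch.no_grad() wrapper and the final print are my additions.
import torch
import torchvision.transforms as transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
import PIL.Image as Image

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

model = fasterrcnn_resnet50_fpn(pretrained=True)  # keep roi_heads intact
model.to(device)
model.eval()

img = Image.open('frame_00001.jpg')
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                     std=[0.229, 0.224, 0.225])])
img = transform(img).unsqueeze(0).to(device)

with torch.no_grad():           # inference only, no gradients needed
    output = model(img)         # the model's internal transform iterates the batch

# output is a list with one dict per image: 'boxes', 'labels' and 'scores'
print(output[0]['boxes'].shape, output[0]['labels'].shape, output[0]['scores'].shape)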
Upvotes: 2