Jitesh Malipeddi

Reputation: 2385

PyTorch FasterRCNN TypeError: forward() takes 2 positional arguments but 3 were given

I am working on object detection and I have a dataset containing images and their corresponding bounding boxes (ground-truth values).

I have built my own feature extractor, which takes an image as input and outputs a feature map (essentially an encoder-decoder system where the final output of the decoder has the same size as the image and 3 channels). Now, I want to feed this feature map as input to a FasterRCNN model for detection instead of the original image. I am using the following code to add the feature map (generated with RTFNet - code at this link) on top of the FRCNN detection module:

frcnn_model = fasterrcnn_resnet50_fpn(pretrained=True)
in_features = frcnn_model.roi_heads.box_predictor.cls_score.in_features
frcnn_model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
fpn_block = frcnn_model.backbone.fpn
rpn_block = frcnn_model.rpn

backbone = RTFNet(num_classes)  # RTFNet is a feature extractor taking a 4-channel image (fused RGB and thermal) as input
model = nn.Sequential(backbone, nn.ReLU(inplace=True))
model = nn.Sequential(model,fpn_block)
model = nn.Sequential(model,rpn_block)
model = nn.Sequential(model,FastRCNNPredictor(in_features, num_classes))

I am just trying to test whether it works, using the following code, which generates random images and bounding boxes:

images, boxes = torch.rand(1, 4, 512, 640), torch.rand(4, 11, 4)
labels = torch.randint(1, num_classes, (4, 11))
images = list(image for image in images)
targets = []
for i in range(len(images)):
  d = {}
  d['boxes'] = boxes[i]
  d['labels'] = labels[i]
  targets.append(d)
output = model(images, targets)

Running this gives me the following error:

TypeError                                 Traceback (most recent call last)
<ipython-input-22-2637b8c27ad2> in <module>()
----> 1 output = model(images, targets)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

TypeError: forward() takes 2 positional arguments but 3 were given

However, when I replace my model with a plain FasterRCNN model, as follows,

model = fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

there is no error and it works fine.

Can anyone let me know where I am going wrong? Thanks in advance.

Upvotes: 2

Views: 5494

Answers (1)

ccl

Reputation: 2378

This is because only the image inputs should be passed into the model, not both the images and the ground-truth targets. So instead of doing output = model(images, targets), you can do output = model(images).
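For reference, here is a minimal inference sketch with only the images passed in (this assumes the stock torchvision fasterrcnn_resnet50_fpn detector rather than the custom nn.Sequential pipeline from the question):

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()  # in eval mode the model expects only images, no targets

images = [torch.rand(3, 512, 640)]  # a list of C x H x W tensors
with torch.no_grad():
    output = model(images)  # note: no targets here

print(output[0].keys())  # dict_keys(['boxes', 'labels', 'scores'])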

As for why the error message says 3 positional arguments were given: forward is a method, so Python implicitly passes the class instance as its first argument (self). In addition to self, you should therefore pass only one more argument, which would be the input images.
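To illustrate the counting, here is a tiny hypothetical module (not from the question): its forward takes two positional arguments (self and x), so calling it with two inputs reproduces the same error:

import torch.nn as nn

class Toy(nn.Module):
    def forward(self, x):  # two positional arguments: self and x
        return x

m = Toy()
m(1)     # fine
m(1, 2)  # TypeError: forward() takes 2 positional arguments but 3 were given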

Upvotes: 3
