Sam B.

Reputation: 3033

pytorch vgg model test on one image

I've trained a VGG model; this is how I transformed the test data:

from torchvision import datasets, transforms

# Crop a random 224x224 patch and convert it to a [0, 1] float tensor.
test_transform_2 = transforms.Compose([transforms.RandomResizedCrop(224),
                                       transforms.ToTensor()])

test_data = datasets.ImageFolder(test_dir, transform=test_transform_2)

The model has finished training; now I want to test it on a single image:

import torch
from scipy import misc

test_image = misc.imread('flower_data/valid/1/image_06739.jpg')
vgg16(torch.from_numpy(test_image))

Error

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-60-b83587325fea> in <module>
----> 1 vgg16(torch.from_numpy(test_image))

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torchvision\models\vgg.py in forward(self, x)
     40 
     41     def forward(self, x):
---> 42         x = self.features(x)
     43         x = x.view(x.size(0), -1)
     44         x = self.classifier(x)

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
     89     def forward(self, input):
     90         for module in self._modules.values():
---> 91             input = module(input)
     92         return input
     93 

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
    299     def forward(self, input):
    300         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301                         self.padding, self.dilation, self.groups)
    302 
    303 

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 3, 3], but got input of size [628, 500, 3] instead

I can tell I need to reshape the input, but I don't know how, since the model seems to expect the input in the form of a batch.

Upvotes: 0

Views: 2211

Answers (1)

Jatentaki

Reputation: 13103

Your image is [h, w, 3], where 3 is the RGB channel dimension, and PyTorch expects [b, 3, h, w], where b is the batch size. So you can reshape it by calling reshaped = img.permute(2, 0, 1).unsqueeze(0). I think there is also a utility function for that somewhere, but I can't find it right now.

So in your case:

tensor = torch.from_numpy(test_image).float() / 255  # uint8 ByteTensor -> float in [0, 1], matching ToTensor()
reshaped = tensor.permute(2, 0, 1).unsqueeze(0)      # [h, w, 3] -> [3, h, w] -> [1, 3, h, w]
your_result = vgg16(reshaped)
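
For completeness: the utility function alluded to above is likely torchvision.transforms.functional.to_tensor, which converts an HWC numpy array or PIL image into a CHW float tensor in [0, 1]. Note also that this version of torchvision's VGG flattens with x.view(x.size(0), -1) (visible in the traceback) rather than adaptive pooling, so the classifier expects 224x224 inputs; the simplest route for a single image may be to reuse the question's own test_transform_2 on a PIL image. A minimal sketch, assuming vgg16 and test_transform_2 from the question:

import torch
from PIL import Image

# Load with PIL and reuse the test-time transform:
# RandomResizedCrop(224) + ToTensor() gives a [3, 224, 224] float tensor in [0, 1].
img = Image.open('flower_data/valid/1/image_06739.jpg')
batch = test_transform_2(img).unsqueeze(0)  # add the batch dimension: [1, 3, 224, 224]

vgg16.eval()            # put dropout layers into inference mode
with torch.no_grad():   # no gradients needed for a forward-only pass
    your_result = vgg16(batch)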

Upvotes: 3
