Reputation: 21
So, I have been working on neural style transfer in PyTorch, but I'm stuck at the point where we have to run the input image through a limited number of layers and minimize the style loss. Long story short, I want to find a way in PyTorch to evaluate the input at different layers of the architecture (I'm using VGG16). I have seen this problem solved very simply in Keras, but I wanted to see whether there is a similar way in PyTorch as well.
from keras.applications.vgg16 import VGG16
from keras.models import Model

model = VGG16()
model = Model(inputs=model.inputs, outputs=model.layers[1].output)
Upvotes: 2
Views: 2033
Reputation: 24894
Of course you can do that:
import torch
import torchvision
pretrained = torchvision.models.vgg16(pretrained=True)
features = pretrained.features
# First 4 layers
model = torch.nn.Sequential(*[features[i] for i in range(4)])
You can always print your model to see how it's structured. If it is a torch.nn.Sequential (or part of it is, as above), you can always use this approach.
Upvotes: 3
Reputation: 352
Please have a look at the following threads:
https://discuss.pytorch.org/t/how-can-l-load-my-best-model-as-a-feature-extractor-evaluator/17254/6
As described there as well, you can modify the forward method to return the outputs of whichever layers you'd like to obtain, or you can register a hook on those layers.
Upvotes: 2