mkocabas

Reputation: 733

Keras VGGnet Pretrained Model Variable Sized Input

I want to extract features from a 368x368 image with a pretrained VGG model. According to the documentation, VGGnet only accepts 224x224 input. Is there a way to feed variable-sized input to Keras VGG?

Here is my code:

import numpy as np
from keras.applications.vgg19 import VGG19
from keras.models import Model

# VGG feature extraction — with the default include_top=True,
# the model only accepts 224x224 input
x_train = np.random.randint(0, 255, (100, 224, 224, 3))
base_model = VGG19(weights='imagenet')
modelVGG = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_conv2').output)
block4_conv2_features = modelVGG.predict(x_train)

Edited code (It works!)

import numpy as np
from keras.applications.vgg19 import VGG19
from keras.models import Model

# VGG feature extraction — include_top=False drops the fully-connected
# layers, so the convolutional base accepts 368x368 input
x_train = np.random.randint(0, 255, (100, 368, 368, 3))
base_model = VGG19(weights='imagenet', include_top=False)
modelVGG = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_conv2').output)
block4_conv2_features = modelVGG.predict(x_train)

Upvotes: 4

Views: 1184

Answers (1)

Fábio Perez

Reputation: 26048

Only the fully-connected (Dense) layers depend on the input size; the convolutional layers work on any spatial dimensions. So for a different input size you need to create your own fully-connected layers.

Call VGG19 with include_top=False to remove the fully-connected layers and then add them yourself. Check this code for reference.
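A minimal sketch of that approach: load the convolutional base with include_top=False and a custom input_shape, then stack your own Dense layers on top. The layer widths and the 10-class output here are illustrative assumptions, and weights=None is used only to keep the example light — in practice you would pass weights='imagenet'.

```python
import numpy as np
from keras.applications.vgg19 import VGG19
from keras.layers import Dense, Flatten
from keras.models import Model

# Convolutional base without the fully-connected top; weights=None keeps
# the sketch light — use weights='imagenet' for real feature extraction
base = VGG19(weights=None, include_top=False, input_shape=(368, 368, 3))

# Custom fully-connected layers sized for the new input (widths are assumptions)
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(10, activation='softmax')(x)  # e.g. a 10-class task (assumption)

model = Model(inputs=base.input, outputs=out)
preds = model.predict(np.random.rand(2, 368, 368, 3))
print(preds.shape)  # (2, 10)
```

Since VGG19 halves the spatial dimensions five times, a 368x368 input reaches the Flatten layer as an 11x11x512 tensor; the Dense layers are then built to match that size rather than the 7x7x512 that 224x224 input would produce.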

Upvotes: 5
