Reputation: 3728
This question makes use of a pre-trained VGG network, whose summary shows an InputLayer being used. I like the clarity of the explicit input layer... even if, functionally, it does nothing (true?). But when I try to mimic this with something like:
from keras.models import Sequential
from keras.layers import Input, Conv2D

model = Sequential()
model.add(Input(shape=(128, 128, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
the result displayed using print(model.summary()) is no different from:
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)))
... and both show the first layer as being a Conv2D layer. Where did my Input layer go? And is it worth the hassle of getting it back?
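Incidentally, poking at the first model suggests the layer is absorbed rather than discarded (a quick sketch; I'm assuming tf.keras-style behavior here, where Sequential hides the InputLayer from .layers but still tracks the input tensor):

print(model.layers)  # [<Conv2D ...>] -- no InputLayer entry
print(model.inputs)  # [<tensor of shape (None, 128, 128, 3)>] -- still tracked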
Upvotes: 1
Views: 1820
Reputation: 12346
In your example you're using a Sequential model; try using a keras.models.Model instead.
import keras

inp = keras.layers.Input((128, 128, 3))
op = keras.layers.Conv2D(32, (3, 3), activation='relu')(inp)
model = keras.models.Model(inputs=[inp], outputs=[op])
model.summary()
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 128, 128, 3)]     0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 126, 126, 32)      896
=================================================================
Total params: 896
Trainable params: 896
Non-trainable params: 0
_________________________________________________________________
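As a quick sanity check that this functional model behaves as the summary suggests (a sketch, reusing the model built above):

import numpy as np

x = np.random.rand(1, 128, 128, 3).astype('float32')
print(model.predict(x).shape)  # (1, 126, 126, 32), matching the Conv2D row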
Upvotes: 2
Reputation: 15043
No, you can keep them separate; it does not make any difference.
As for the input_shape argument, it can be specified for each and every layer, yet Keras is smart enough to deduce the shapes of the subsequent layers on its own, so we do not mention it explicitly.
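A short sketch of that inference (assuming the same keras imports as above): only the first layer is given a shape, and every later layer's output shape is deduced automatically.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)))
model.add(MaxPooling2D((2, 2)))  # no shape argument needed here...
model.add(Flatten())             # ...or here; Keras infers both
model.add(Dense(10))
model.summary()                  # the Output Shape column is filled in for every layer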
Upvotes: 0