Reputation: 794
here is my model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(40, 40, 3)),
    tf.keras.layers.Dense(150, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
and my input tensor is called image_batch
When I run np.shape(image_batch[0]), the result is TensorShape([40, 40, 3]). This is expected, since each training example is 40x40x3 (it's an RGB image). However, when I run predictions = model(image_batch[0]).numpy() to get the model's predictions, I get the error:
WARNING:tensorflow:Model was constructed with shape Tensor("flatten_1_input:0", shape=(None, 40, 40, 3), dtype=float32) for input (None, 40, 40, 3), but it was re-called on a Tensor with incompatible shape (40, 40, 3).
So my question is: why does the Keras model expect a shape with an extra "None" dimension, and how do I provide it to the model?
Upvotes: 0
Views: 3484
Reputation: 1387
The answers here are correct, but just to fill in the missing piece: the method for prepping a tensor for prediction is np.expand_dims().
Let's say our model looks like this
Model: "model_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
sequential (Sequential) (None, 224, 224, 3) 0
_________________________________________________________________
vggface_resnet50 (Functional (None, 1, 1, 2048) 23561152
_________________________________________________________________
flatten (Flatten) (None, 2048) 0
_________________________________________________________________
classifier (Dense) (None, 8) 16392
=================================================================
Total params: 23,577,544
Trainable params: 16,392
Non-trainable params: 23,561,152
_________________________________________________________________
And let's say this is an image classifier, so we need to ensure that the image is 224x224x3, AND we need to add the 'batch' dimension (let's say we've loaded our image with PIL.Image.open()).
We can prep images with:
import numpy as np
from PIL import Image as pillowImageLoader

def prepPred(someImage, resizeTarget=224):
    """Prepare an image for prediction."""
    # Resize to the model's expected spatial dimensions
    someImage = someImage.resize((resizeTarget, resizeTarget),
                                 pillowImageLoader.BILINEAR)
    # Convert the PIL image to a NumPy array
    imageArray = np.array(someImage)
    # Add the batch dimension and return
    return np.expand_dims(imageArray, axis=0)
Models always expect batches: when you train, you provide batches, and when you predict, you also have to provide batches. In this case, it's just a 'batch' of 1.
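A quick NumPy-only sketch of what np.expand_dims does here (a zero array stands in for a real decoded image):

```python
import numpy as np

# A single 224x224 RGB image as an array (placeholder data)
image_array = np.zeros((224, 224, 3), dtype=np.float32)

# Insert the batch dimension at axis 0: (224, 224, 3) -> (1, 224, 224, 3)
batched = np.expand_dims(image_array, axis=0)
print(batched.shape)  # (1, 224, 224, 3)
```

The resulting array matches the (None, 224, 224, 3) input signature of the model above, with the batch dimension filled in as 1.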
Upvotes: 0
Reputation: 94
The None is the batch dimension. It is set as None because it can vary: you can use a batch size of 512 during training and then batches of size 1 when predicting, for example.
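To see why leaving the batch dimension unspecified works, here is a NumPy sketch where a plain matrix multiplication stands in for a dense layer (the weight matrix and sizes are made up for illustration): the same forward pass handles any batch size.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical weights: 40*40*3 = 4800 flattened inputs -> 10 outputs
weights = rng.normal(size=(4800, 10))

def dense_forward(batch):
    """Flatten each example and apply the dense layer; works for any batch size."""
    return batch.reshape(batch.shape[0], -1) @ weights

train_batch = rng.normal(size=(512, 40, 40, 3))
predict_batch = rng.normal(size=(1, 40, 40, 3))
print(dense_forward(train_batch).shape)    # (512, 10)
print(dense_forward(predict_batch).shape)  # (1, 10)
```

Only the leading dimension changes between the two calls, which is exactly what None expresses in the Keras shape.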
Upvotes: 2
Reputation: 19250
The None dimension is the batch dimension. In other words, the input should have the shape (batch_size, height, width, num_channels).
If you want to predict on one input, change model(image_batch[0]).numpy() to model(image_batch[0:1]).numpy(). This will maintain the first dimension; the shape will be (1, 40, 40, 3) in this case.
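The difference between indexing and slicing is plain NumPy behavior, so it can be checked without a model (a zero array with an assumed batch of 32 stands in for image_batch):

```python
import numpy as np

image_batch = np.zeros((32, 40, 40, 3), dtype=np.float32)

print(image_batch[0].shape)    # (40, 40, 3)   - integer index drops the batch dimension
print(image_batch[0:1].shape)  # (1, 40, 40, 3) - slice keeps it as a batch of 1
```

The slice image_batch[0:1] is what the model can consume directly, since it still matches the (None, 40, 40, 3) input signature.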
Upvotes: 3