salt lake

Reputation: 33

Predicting from a Conv2D model says the image must be 4D too

This is the model I've created:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(64, (5, 5), input_shape=(28, 28, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (5, 5)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
# added layers
model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X, y, batch_size=256, epochs=25, validation_split=0.3)

But loading an image for prediction like this:

import numpy as np

test_image = np.array(img)               # img is a single 28x28 RGB image
test_image = test_image.astype('float32')
test_image /= 255
# test_image.shape is (28, 28, 3)
print(model.predict(test_image))

results in the following error: ValueError: Input 0 of layer sequential_11 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: (None, 28, 3)

X.shape is (2163, 28, 28, 3), where 2163 is the number of 28x28 RGB images.

Upvotes: 1

Views: 58

Answers (2)

Nicolas Gervais

Reputation: 36714

You need a batch dimension, because Keras models expect their input to have a leading batch axis. I suggest you use np.expand_dims:

import numpy as np
import tensorflow as tf

test_image = np.array(img).astype('float32')
test_image = np.expand_dims(test_image, axis=0) / 255        # add the batch dimension -> (1, 28, 28, 3)
test_image = tf.image.resize_with_pad(test_image, 28, 28)    # ensure the spatial size is 28x28

print(model.predict(test_image))
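
With the batch dimension added, test_image has shape (1, 28, 28, 3), and model.predict returns an array of shape (1, 10): one softmax vector for the single image.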

Upvotes: 0

abhishake

Reputation: 141

The model expects an input with 4 dimensions, so you have to reshape your image to (n_images, 28, 28, 3). This adds an extra dimension without changing the data. In general, your data needs the shape (n_images, x_shape, y_shape, channels). Try it, and make sure the input shape of your first layer is (28, 28, 3).
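
For example, assuming img is the single 28x28x3 image from the question, a minimal sketch of that reshape:

import numpy as np

test_image = np.array(img).astype('float32') / 255   # shape (28, 28, 3)
test_image = test_image.reshape(1, 28, 28, 3)        # shape (1, 28, 28, 3): (n_images, x_shape, y_shape, channels)
print(model.predict(test_image))                     # one softmax vector of length 10 per image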

Upvotes: 1
