Tyranitar

Reputation: 37

Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (224, 224, 3)

I am trying to train a CNN on my own data for a binary classification problem, but I get an error about the expected input size, which I thought was (224,224,3). I searched for this case and found some people saying it can be fixed by reshaping the image from (224,224,3) to (1,224,224,3), but that did not work for me.
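For reference, that suggested reshape adds a leading batch dimension, which in NumPy is typically done with `np.expand_dims` (a minimal sketch using a dummy array in place of a loaded image):

```python
import numpy as np

# Dummy image standing in for one loaded with cv2 (HxWxC)
image = np.zeros((224, 224, 3), dtype=np.uint8)

# Add a leading batch dimension: (224, 224, 3) -> (1, 224, 224, 3)
batched = np.expand_dims(image, axis=0)
print(batched.shape)  # (1, 224, 224, 3)
```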

Here is my code:

import scipy.io
import tensorflow as tf
import cv2

# Parameters
img_height = 224
img_width = 224
img_depth = 3
classes = 2

# Load Data
db_name = 'polo'
db_path = 'D:/databases/' + db_name + '/'
db_data = scipy.io.loadmat(db_path + 'db_py.mat')
db_size = len(db_data['db']['images'][0][0][0])
faces_path = 'data/' + db_name + '/faces/'
images = []
labels = [0] * db_size
for i in range(0,db_size):
    filename = 'data/' + db_name + '/faces/' + db_data['db']['images'][0][0][0][i][2][0]
    image = cv2.imread(filename)
    image = cv2.resize(image, (img_height, img_width))
    images.append(image)
    labels[i] = db_data['db']['subjects'][0][0][0][i][4][0][0][0][0][0]

inputs = tf.keras.layers.Input(shape=(img_height,img_width,img_depth))
layers = tf.keras.layers.Conv2D(32, (3, 3), padding="same")(inputs)
layers = tf.keras.layers.Activation("relu")(layers)
layers = tf.keras.layers.BatchNormalization(axis=-1)(layers)
layers = tf.keras.layers.Conv2D(32, (3, 3), padding="same")(layers)
layers = tf.keras.layers.Activation("relu")(layers)
layers = tf.keras.layers.BatchNormalization(axis=-1)(layers)
layers = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(layers)
layers = tf.keras.layers.Dropout(0.25)(layers)
layers = tf.keras.layers.Conv2D(64, (3, 3), padding="same")(layers)
layers = tf.keras.layers.Activation("relu")(layers)
layers = tf.keras.layers.BatchNormalization(axis=-1)(layers)
layers = tf.keras.layers.Conv2D(64, (3, 3), padding="same")(layers)
layers = tf.keras.layers.Activation("relu")(layers)
layers = tf.keras.layers.BatchNormalization(axis=-1)(layers)
layers = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(layers)
layers = tf.keras.layers.Dropout(0.25)(layers)
layers = tf.keras.layers.Flatten()(layers)
layers = tf.keras.layers.Dense(512)(layers)
layers = tf.keras.layers.Activation("relu")(layers)
layers = tf.keras.layers.BatchNormalization()(layers)
layers = tf.keras.layers.Dropout(0.5)(layers)
layers = tf.keras.layers.Dense(classes)(layers)
layers = tf.keras.layers.Activation("softmax")(layers)

InitialLearnRate = 0.03
MaxEpochs = 30
MiniBatchSize = 32
opt = tf.keras.optimizers.SGD(lr=InitialLearnRate, decay=InitialLearnRate / MaxEpochs)
model = tf.keras.Model(inputs, layers , name="net")
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])
model.summary()
H = model.fit(images, labels,
    batch_size=MiniBatchSize, epochs=MaxEpochs, verbose=1,steps_per_epoch=10)

Upvotes: 2

Views: 6499

Answers (1)

a-doering

Reputation: 1179

If you go to the official documentation and search for the Conv2D input shape, you'll see:

4D tensor with shape: (batch, channels, rows, cols) if data_format is "channels_first" or 4D tensor with shape: (batch, rows, cols, channels) if data_format is "channels_last"

Alternatively, here's the detailed answer on input formatting.

If you have multiple images, your input should have the shape (batch_size, 224, 224, 3) in your case. What you are doing instead is creating a plain Python list containing all these images. I would try:

import numpy as np

images = np.empty((db_size, 224, 224, 3))

for i in range(0,db_size):
    filename = ('data/'
                + db_name
                + '/faces/'
                + db_data['db']['images'][0][0][0][i][2][0])
    image = cv2.imread(filename)
    images[i] = cv2.resize(image, (img_height, img_width))
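Alternatively, if you prefer to keep building a list as in your original loop, you can stack it into a single 4D array afterwards with `np.stack` (a minimal sketch with dummy arrays in place of loaded images):

```python
import numpy as np

# Assumption: `images` is a list of HxWxC arrays, all resized to 224x224x3
images = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(4)]

# Stack the list along a new leading batch axis
batch = np.stack(images, axis=0)
print(batch.shape)  # (4, 224, 224, 3)
```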

If this is not helpful, posting the full error traceback could help other people answer your question.

Upvotes: 3
