Farukh Khan

Reputation: 305

Building Inception Architecture for Image Classification. Keras. ValueError

I am trying to build a GoogLeNet Inception architecture for image classification. I have already read and saved my image data and labels, whose shapes are given below.

print(X_train.shape)
(16016, 224, 224, 3)
print(X_test.shape)
(16016, 1, 163)
print(y_train.shape)
(14939, 224, 224, 3)
print(y_test.shape)
(14939, 1, 163)

With this data I am trying to train my classifier. My code for it is below.

import os

import keras
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Model
from keras.optimizers import SGD

IMG_SIZE = 224
input_image = Input(shape=(IMG_SIZE, IMG_SIZE, 3))

# Tower 1: 1x1 convolution followed by a 3x3 convolution
tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_image)
tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)

# Tower 2: 1x1 convolution followed by a 5x5 convolution
tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_image)
tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)

# Tower 3: 3x3 max pooling followed by a 1x1 convolution
tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input_image)
tower_3 = Conv2D(64, (1, 1), padding='same', activation='relu')(tower_3)

# Concatenate the towers along the channel axis, then classify
output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=3)
output = Flatten()(output)
out = Dense(163, activation='softmax')(output)

model = Model(inputs = input_image, outputs = out)
print(model.summary())

epochs = 30
lrate = 0.01
decay = lrate/epochs

sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

history = model.fit(X_train,y_train,validation_data=(X_test,y_test), epochs=epochs, batch_size=32)

from keras.models import model_from_json

model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)

model.save_weights(os.path.join(os.getcwd(),'model.h5'))

scores = model.evaluate(X_test,y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

But every time I run my program it gives me a ValueError which I do not understand. I have already tried 'y_test = y_test.reshape(14939, IMG_SIZE, IMG_SIZE, 3)', but it still gives me the same error.

Error

Traceback (most recent call last):
  File "c:/Users/zeele/OneDrive/Desktop/googleNet_Architecture.py", line 149, in <module>
    history = model.fit(X_train,y_train,validation_data=(X_test,y_test), epochs=epochs, batch_size=32)
  File "C:\Users\zeele\Miniconda3\lib\site-packages\keras\engine\training.py", line 1405, in fit
    batch_size=batch_size)
  File "C:\Users\zeele\Miniconda3\lib\site-packages\keras\engine\training.py", line 1299, in _standardize_user_data
    exception_prefix='model target')
  File "C:\Users\zeele\Miniconda3\lib\site-packages\keras\engine\training.py", line 121, in _standardize_input_data
    str(array.shape))
ValueError: Error when checking model target: expected dense_1 to have 2 dimensions, but got array with shape (14939, 224, 224, 3)

Please help me through this.

Thank you.

Upvotes: 0

Views: 72

Answers (1)

desertnaut

Reputation: 60321

What is certain is that the shapes of your data are not correct/consistent; since

print(X_train.shape)
(16016, 224, 224, 3)

one would certainly expect X_test.shape to be qualitatively similar, differing only in the number of samples, i.e. something of the form (NUM_TEST_SAMPLES, 224, 224, 3). But what you report is:

print(X_test.shape)
(16016, 1, 163)

which looks more like the expected shape of your labels (i.e. y_train.shape).

Notice also that the length of your data & labels must be the same for training & test sets, which again is not the case here: for both training & test sets, you report 16,016 samples of data and only 14,939 labels.
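A minimal sanity check along these lines (just a sketch, assuming the arrays are NumPy arrays with the variable names you show above) would catch such a mismatch before fit is ever called:

# Sketch: verify data/label consistency before training (assumes NumPy arrays)
assert X_train.shape[0] == y_train.shape[0], "train data/labels length mismatch"
assert X_test.shape[0] == y_test.shape[0], "test data/labels length mismatch"
assert X_train.shape[1:] == (224, 224, 3), "unexpected image shape in X_train"
assert X_test.shape[1:] == (224, 224, 3), "unexpected image shape in X_test"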

My guess is that most probably you have made a (frequent enough) mistake when splitting your data into training & test sets using scikit-learn's train_test_split (see the docs):

# WRONG ORDER:
X_train, y_train, X_test, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# CORRECT ORDER:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
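With the correct order, printing the shapes right after the split (again just a sketch using the same assumed variable names) should show image-shaped X arrays and matching sample counts between each X and its y:

# Sketch: both X arrays should be (num_samples, 224, 224, 3), and each y
# must have the same num_samples as its corresponding X.
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)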

Upvotes: 1
