Ke Zhu

Reputation: 225

model.evaluate_generator is giving wrong accuracy

I have trained a ResNet model with Keras. While debugging it, I found that the accuracy reported by evaluate_generator is different from the accuracy I calculate manually from the output of predict_generator.

The model is built and compiled with

from time import time

import keras
from keras.applications.resnet50 import ResNet50
from keras.layers import Dense, Dropout, GlobalAveragePooling2D
from keras.models import Model
from keras.callbacks import ModelCheckpoint, TensorBoard

optimizer = keras.optimizers.Adam(lr=0.001)
base_model = ResNet50(weights=None, include_top=False, input_shape=(256, 256, 3))
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.5)(x)
# `classes` is the list of class labels, defined elsewhere
predictions = Dense(len(classes), activation='softmax')(x)

filepath = "model-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
tensorboard = TensorBoard(log_dir="logs/{}".format(time()), write_graph=False, update_freq="batch")
callbacks_list = [checkpoint, tensorboard]

model = Model(inputs=base_model.input, outputs=predictions)

model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

import numpy

# `data_it` is the evaluation data generator (e.g. from flow_from_directory), defined elsewhere
test_steps_per_epoch = numpy.math.ceil(data_it.samples / data_it.batch_size)

predictions = model.predict_generator(data_it, steps=test_steps_per_epoch)
predicted_classes = numpy.argmax(predictions, axis=1)
print(model.evaluate_generator(data_it, steps=test_steps_per_epoch))

The above gives

[0.3230868512656041, 0.921268782482911]

When I check manually:

true_classes = data_it.classes
print(numpy.mean(true_classes == predicted_classes))

the result is

0.6125515727317461

Upvotes: 1

Views: 134

Answers (1)

Ke Zhu

Reputation: 225

Found the issue: the data generator was created with shuffle=True, so the order of data_it.classes no longer matched the order of the predictions, and my manually computed true_classes comparison was wrong.
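A minimal sketch of the fix, assuming data_it comes from ImageDataGenerator.flow_from_directory (the directory path, rescaling, and batch size below are placeholders, not from the question): build the evaluation generator with shuffle=False so the order of predict_generator's output lines up with data_it.classes.

import numpy
from keras.preprocessing.image import ImageDataGenerator

# Hypothetical test directory and preprocessing; substitute your own.
data_it = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    'data/test',               # placeholder path
    target_size=(256, 256),
    batch_size=32,
    class_mode='categorical',
    shuffle=False)             # keep the sample order fixed

test_steps_per_epoch = numpy.math.ceil(data_it.samples / data_it.batch_size)
predictions = model.predict_generator(data_it, steps=test_steps_per_epoch)
predicted_classes = numpy.argmax(predictions, axis=1)

# With shuffle=False, data_it.classes is in the same order as predicted_classes,
# so the manual accuracy now agrees with evaluate_generator.
print(numpy.mean(predicted_classes == data_it.classes))

Note that evaluate_generator itself is unaffected by shuffling, since each batch is scored together with its own labels; only the manual comparison against data_it.classes requires a fixed sample order.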

Upvotes: 1
