NielsNL4

Reputation: 640

Why is my validation accuracy stuck around 65% and how do I increase it?

I'm building an image classification CNN on top of VGG16 with 5 classes, each containing 693 images of 224×224 pixels, but my validation accuracy gets stuck around 60%–65% after 15–20 epochs.

I'm already using some data augmentation, batch normalization, and dropout, and I have frozen the first 5 layers, but I can't seem to push my accuracy above 65%.

These are my own layers:

# Imports assumed from the rest of the script
from keras import applications, optimizers
from keras.models import Sequential, Model
from keras.layers import Flatten, Dense, Dropout, BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

img_rows, img_cols, img_channel = 224, 224, 3

base_model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_rows, img_cols, img_channel))
for layer in base_model.layers[:5]:
    layer.trainable = False

add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dropout(0.5))
add_model.add(Dense(512, activation='relu'))
add_model.add(BatchNormalization())
add_model.add(Dropout(0.5))
add_model.add(Dense(5, activation='softmax'))

model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.0001),
              metrics=['accuracy'])

model.summary()

And this is how I train the model on my dataset:

batch_size = 64
epochs = 25

train_datagen = ImageDataGenerator(
        rotation_range=30,
        width_shift_range=.1,
        height_shift_range=.1, 
        horizontal_flip=True)
train_datagen.fit(x_train)


history = model.fit_generator(
    train_datagen.flow(x_train, y_train, batch_size=batch_size),
    steps_per_epoch=x_train.shape[0] // batch_size,
    epochs=epochs,
    validation_data=(x_test, y_test),
    callbacks=[ModelCheckpoint('VGG16-transferlearning.model', monitor='val_acc', save_best_only=True)]
)

I want to get a higher accuracy because what I get now is just not enough, so any help or suggestions would be appreciated.

Upvotes: 3

Views: 6342

Answers (1)

Daniel Valcarce

Reputation: 101

A few things you can try are:

  • Reduce your batch size.
  • Choose another optimizer: RMSprop, SGD...
  • Increase the default learning rate and then use the ReduceLROnPlateau callback
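
The third suggestion can be sketched in plain Python: the `schedule_lr` helper below is hypothetical and only mimics what Keras's `ReduceLROnPlateau` does (the `factor`, `patience`, and `min_lr` values are illustrative, not recommendations).

    # Hypothetical standalone sketch of ReduceLROnPlateau's behavior:
    # watch a metric, and once it stops improving for `patience` epochs,
    # multiply the learning rate by `factor` (never going below `min_lr`).
    def schedule_lr(val_losses, lr=1e-3, factor=0.5, patience=3, min_lr=1e-6):
        """Replay a history of validation losses and return the final lr."""
        best = float('inf')
        wait = 0
        for loss in val_losses:
            if loss < best:      # metric improved: reset the counter
                best = loss
                wait = 0
            else:                # plateau: count epochs without improvement
                wait += 1
                if wait >= patience:
                    lr = max(lr * factor, min_lr)
                    wait = 0
        return lr

    # A 3-epoch plateau triggers one halving: 1e-3 -> 5e-4
    print(schedule_lr([0.9, 0.8, 0.8, 0.8, 0.8], lr=1e-3))

In Keras itself this would just be `keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=1e-6)` added to the `callbacks` list passed to `fit_generator`, alongside the existing `ModelCheckpoint`.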

But, as usual, it depends on the data you are using. Is your dataset well balanced?
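
You can check the balance quickly with NumPy; `class_counts` is a hypothetical helper name, and it assumes integer class labels like the ones `sparse_categorical_crossentropy` expects.

    import numpy as np

    def class_counts(y):
        """Return a {label: count} dict for an array of integer labels."""
        labels, counts = np.unique(np.asarray(y).ravel(), return_counts=True)
        return dict(zip(labels.tolist(), counts.tolist()))

    # e.g. class_counts(y_train) should show roughly equal counts per class
    print(class_counts([0, 0, 1, 2, 2, 2]))

If one class dominates, accuracy alone can be misleading; a confusion matrix on the validation set would show which classes the model confuses.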

Upvotes: 4
