Reghunaath A A

Reputation: 3210

Validation accuracy starts decreasing after a certain point, even with regularization, in Keras

I have built a model to categorize images into 10 categories. I'm using a small dataset (approximately 600 images per category). The validation accuracy does not exceed 60%. I have tried regularization and a dropout layer but still can't improve it. I tried transfer learning with TensorFlow for Poets 2 and got a final accuracy of 75%, so I don't think the problem is the dataset. I have also tried the solutions from similar questions (adding regularization, dropout, changing softmax to sigmoid), but none of them seem to work.

PS: I'm a beginner to Deep Learning

My model:

    import tensorflow as tf
    from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense
    from tensorflow.keras import regularizers

    model = tf.keras.models.Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=X.shape[1:]))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(32, (3, 3)))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, (3, 3)))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(64, kernel_regularizer=regularizers.l2(0.01)))
    model.add(Activation("relu"))

    model.add(Dense(10, kernel_regularizer=regularizers.l2(0.01)))
    model.add(Activation("softmax"))

    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
    model.fit(X, Y, batch_size=16, validation_split=0.1, epochs=50, verbose=2)
    model.save('tm1.h5')

Output:

Train on 5877 samples, validate on 654 samples
Epoch 1/50
5877/5877 - 38s - loss: 2.2745 - accuracy: 0.2277 - val_loss: 2.0920 - val_accuracy: 0.2477
Epoch 2/50
5877/5877 - 17s - loss: 1.9706 - accuracy: 0.3362 - val_loss: 1.9955 - val_accuracy: 0.3318
Epoch 3/50
5877/5877 - 18s - loss: 1.8413 - accuracy: 0.4056 - val_loss: 1.9985 - val_accuracy: 0.3180
Epoch 4/50
5877/5877 - 17s - loss: 1.7733 - accuracy: 0.4514 - val_loss: 1.7391 - val_accuracy: 0.4526
Epoch 5/50
5877/5877 - 14s - loss: 1.6771 - accuracy: 0.4943 - val_loss: 1.7292 - val_accuracy: 0.4297
Epoch 6/50
5877/5877 - 14s - loss: 1.6172 - accuracy: 0.5159 - val_loss: 1.6708 - val_accuracy: 0.5076
Epoch 7/50
5877/5877 - 14s - loss: 1.5484 - accuracy: 0.5455 - val_loss: 1.6793 - val_accuracy: 0.4878
Epoch 8/50
5877/5877 - 13s - loss: 1.4945 - accuracy: 0.5642 - val_loss: 1.5690 - val_accuracy: 0.5535
Epoch 9/50
5877/5877 - 13s - loss: 1.4465 - accuracy: 0.5955 - val_loss: 1.5932 - val_accuracy: 0.5520
Epoch 10/50
5877/5877 - 12s - loss: 1.4056 - accuracy: 0.6149 - val_loss: 1.5437 - val_accuracy: 0.5673
Epoch 11/50
5877/5877 - 12s - loss: 1.3573 - accuracy: 0.6362 - val_loss: 1.5647 - val_accuracy: 0.5810
Epoch 12/50
5877/5877 - 12s - loss: 1.3086 - accuracy: 0.6701 - val_loss: 1.5582 - val_accuracy: 0.5933
Epoch 13/50
5877/5877 - 13s - loss: 1.2784 - accuracy: 0.6828 - val_loss: 1.5995 - val_accuracy: 0.5749
Epoch 14/50
5877/5877 - 13s - loss: 1.2406 - accuracy: 0.7019 - val_loss: 1.6150 - val_accuracy: 0.6131
Epoch 15/50
5877/5877 - 15s - loss: 1.1769 - accuracy: 0.7351 - val_loss: 1.7797 - val_accuracy: 0.5382
Epoch 16/50
5877/5877 - 14s - loss: 1.1676 - accuracy: 0.7422 - val_loss: 1.8158 - val_accuracy: 0.5642
Epoch 17/50
5877/5877 - 13s - loss: 1.1088 - accuracy: 0.7708 - val_loss: 1.7937 - val_accuracy: 0.5765
Epoch 18/50
5877/5877 - 15s - loss: 1.0763 - accuracy: 0.7885 - val_loss: 1.9044 - val_accuracy: 0.5612
Epoch 19/50
5877/5877 - 19s - loss: 1.0481 - accuracy: 0.8007 - val_loss: 1.8861 - val_accuracy: 0.5795
Epoch 20/50
5877/5877 - 14s - loss: 0.9871 - accuracy: 0.8222 - val_loss: 2.0031 - val_accuracy: 0.5765
Epoch 21/50
5877/5877 - 13s - loss: 0.9629 - accuracy: 0.8356 - val_loss: 2.0946 - val_accuracy: 0.5688
Epoch 22/50
5877/5877 - 15s - loss: 0.9392 - accuracy: 0.8455 - val_loss: 2.0742 - val_accuracy: 0.5795
Epoch 23/50
5877/5877 - 15s - loss: 0.9087 - accuracy: 0.8603 - val_loss: 2.1889 - val_accuracy: 0.5642
Epoch 24/50
5877/5877 - 16s - loss: 0.9055 - accuracy: 0.8583 - val_loss: 2.4053 - val_accuracy: 0.5489
Epoch 25/50
5877/5877 - 14s - loss: 0.8826 - accuracy: 0.8663 - val_loss: 2.3087 - val_accuracy: 0.5398
Epoch 26/50
5877/5877 - 14s - loss: 0.8849 - accuracy: 0.8724 - val_loss: 2.4014 - val_accuracy: 0.5428
Epoch 27/50
5877/5877 - 17s - loss: 0.8603 - accuracy: 0.8758 - val_loss: 2.3956 - val_accuracy: 0.5566
Epoch 28/50
5877/5877 - 15s - loss: 0.8523 - accuracy: 0.8770 - val_loss: 2.3809 - val_accuracy: 0.5520
Epoch 29/50
5877/5877 - 14s - loss: 0.8500 - accuracy: 0.8846 - val_loss: 2.5112 - val_accuracy: 0.5505
Epoch 30/50
5877/5877 - 12s - loss: 0.8411 - accuracy: 0.8863 - val_loss: 2.2699 - val_accuracy: 0.5459
Epoch 31/50
5877/5877 - 13s - loss: 0.8405 - accuracy: 0.8903 - val_loss: 2.4893 - val_accuracy: 0.5550
Epoch 32/50
5877/5877 - 13s - loss: 0.8420 - accuracy: 0.8926 - val_loss: 2.4964 - val_accuracy: 0.5489
Epoch 33/50
5877/5877 - 12s - loss: 0.8047 - accuracy: 0.8998 - val_loss: 2.6824 - val_accuracy: 0.5505
Epoch 34/50
5877/5877 - 15s - loss: 0.8118 - accuracy: 0.9028 - val_loss: 2.4617 - val_accuracy: 0.5535
Epoch 35/50
5877/5877 - 13s - loss: 0.8001 - accuracy: 0.9098 - val_loss: 2.2837 - val_accuracy: 0.5489
Epoch 36/50
5877/5877 - 15s - loss: 0.7888 - accuracy: 0.9030 - val_loss: 2.4703 - val_accuracy: 0.5703
Epoch 37/50
5877/5877 - 16s - loss: 0.7769 - accuracy: 0.9095 - val_loss: 2.4717 - val_accuracy: 0.5719
Epoch 38/50
5877/5877 - 15s - loss: 0.7812 - accuracy: 0.9057 - val_loss: 2.6211 - val_accuracy: 0.5443
Epoch 39/50
5877/5877 - 14s - loss: 0.7878 - accuracy: 0.9083 - val_loss: 2.5498 - val_accuracy: 0.5749
Epoch 40/50
5877/5877 - 14s - loss: 0.8238 - accuracy: 0.8977 - val_loss: 2.7981 - val_accuracy: 0.5398
Epoch 41/50
5877/5877 - 15s - loss: 0.7833 - accuracy: 0.9091 - val_loss: 2.6674 - val_accuracy: 0.5443
Epoch 42/50
5877/5877 - 13s - loss: 0.7170 - accuracy: 0.9309 - val_loss: 2.6951 - val_accuracy: 0.5703
Epoch 43/50
5877/5877 - 12s - loss: 0.7493 - accuracy: 0.9163 - val_loss: 2.4696 - val_accuracy: 0.5703
Epoch 44/50
5877/5877 - 13s - loss: 0.7903 - accuracy: 0.9056 - val_loss: 2.8673 - val_accuracy: 0.5336
Epoch 45/50
5877/5877 - 14s - loss: 0.7861 - accuracy: 0.9144 - val_loss: 2.6287 - val_accuracy: 0.5382
Epoch 46/50
5877/5877 - 16s - loss: 0.7284 - accuracy: 0.9248 - val_loss: 2.6651 - val_accuracy: 0.5367
Epoch 47/50
5877/5877 - 13s - loss: 0.7216 - accuracy: 0.9246 - val_loss: 2.5384 - val_accuracy: 0.5520
Epoch 48/50
5877/5877 - 13s - loss: 0.7890 - accuracy: 0.9044 - val_loss: 2.7023 - val_accuracy: 0.5398
Epoch 49/50
5877/5877 - 13s - loss: 0.7362 - accuracy: 0.9270 - val_loss: 2.9077 - val_accuracy: 0.5122
Epoch 50/50
5877/5877 - 13s - loss: 0.7080 - accuracy: 0.9309 - val_loss: 2.7464 - val_accuracy: 0.5627

Upvotes: 0

Views: 73

Answers (1)

Graeme Niedermayer

Reputation: 124

This is a very clear sign of overfitting, so that is the keyword to search for.

Whenever training accuracy keeps improving while validation accuracy stalls or gets worse, you are dealing with overfitting.
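You can spot this pattern programmatically from the history dict that `model.fit()` returns. The helper below is a hypothetical heuristic, not a standard Keras function, and the numbers are abbreviated from the question's own log:

```python
def looks_overfit(history, window=5):
    """Heuristic: training accuracy still rising while validation
    accuracy has stopped improving over the last `window` epochs."""
    acc = history["accuracy"]
    val = history["val_accuracy"]
    train_improving = acc[-1] > acc[-window]
    val_stalled = max(val[-window:]) <= max(val[:-window])
    return train_improving and val_stalled

# Abbreviated from the question's log (epoch 10, then epochs 45-50):
hist = {
    "accuracy":     [0.61, 0.89, 0.91, 0.92, 0.92, 0.90, 0.93],
    "val_accuracy": [0.57, 0.55, 0.54, 0.54, 0.55, 0.54, 0.56],
}
```

Training accuracy climbs from 0.61 to 0.93 while validation accuracy drifts down from 0.57, which is exactly the symptom described above.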

Solutions to overfitting include:

  • more data
  • simpler models (fewer parameters)
  • more representative data (seems like you have a pretty good representation)
  • regularization and dropout (great work!)

I'd suggest starting with a simpler model. Just halve the layer sizes and see whether the problem gets better or worse.
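A halved version of the model from the question might look like the sketch below, with filter counts of 16/16/32 instead of 32/32/64, a 32-unit dense layer instead of 64, and dropout added after it. This is an illustration of the "simpler model" suggestion, not a guaranteed fix, and the input shape is a placeholder:

```python
import tensorflow as tf
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten,
                                     Dense, Dropout)
from tensorflow.keras import regularizers

def build_small_model(input_shape=(64, 64, 3), num_classes=10):
    model = tf.keras.models.Sequential([
        tf.keras.Input(shape=input_shape),
        # Halved filter counts: 16/16/32 instead of 32/32/64
        Conv2D(16, (3, 3), activation="relu"),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(16, (3, 3), activation="relu"),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(32, (3, 3), activation="relu"),
        MaxPooling2D(pool_size=(2, 2)),
        Flatten(),
        # Halved dense width, plus dropout against overfitting
        Dense(32, activation="relu",
              kernel_regularizer=regularizers.l2(0.01)),
        Dropout(0.5),
        Dense(num_classes, activation="softmax"),
    ])
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer="adam", metrics=["accuracy"])
    return model
```

If validation accuracy improves with the smaller model, the original network had more capacity than the ~600 images per class could support.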

Also, for troubleshooting, I've found it much easier to use SGD than Adam as the optimizer. Adam converges much more quickly, but when you don't yet understand what is happening, plain SGD is easier to reason about.

Upvotes: 1
