Reputation: 91
I have built two CNN classifiers using the Adam optimizer. To one of them I applied dropout (.05); the second one has no dropout. I got the accuracy and loss values below for each case. Which one is performing better? I noticed that both of them have comparable accuracy, but the classifier with dropout had better and less fluctuating loss results.
Below, the first picture is for the classifier with dropout (0.5) enabled, and the second one is for the classifier without dropout.
Upvotes: 1
Views: 436
Reputation: 15003
The dropout that you added mitigates overfitting; in essence, this is why the loss graph does not oscillate as much as in the case with no dropout (or any other regularization).
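To make the mechanism concrete, here is a minimal NumPy sketch of *inverted* dropout (the variant used by common frameworks such as Keras): at training time a random fraction of activations is zeroed and the survivors are rescaled, so no correction is needed at inference time. The function name and toy data are illustrative, not from your model.

```python
import numpy as np

def inverted_dropout(activations, rate, rng):
    """Zero out roughly a fraction `rate` of activations and rescale
    the survivors by 1/(1 - rate), so the expected activation
    magnitude matches what the network sees at inference time."""
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
acts = np.ones((4, 8))  # a toy batch of activations, all 1.0
dropped = inverted_dropout(acts, rate=0.5, rng=rng)

# Each entry is either 0.0 (dropped) or 2.0 (kept and rescaled by 1/0.5)
print(dropped)
```

Because a different random mask is drawn on every batch, the effective network changes from step to step, which is exactly what discourages co-adaptation of units and smooths the loss curve you observed.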
Even if the accuracy on the validation set is slightly better (1-2% higher) for the model without dropout/regularization, you should expect the model with dropout to perform better on unseen data (the test set).
The dropout model should be chosen; you could also experiment with different dropout rates to compare performance. In addition, it would be good to have a test set so you can quickly verify any assumptions that you have.
Note that here you are using the validation set as a test set, but they have different purposes: the validation set guides training decisions (hyperparameters, early stopping), while the test set is touched only once, for the final evaluation. What you are actually showing is the training-validation loss/accuracy, not the training-test loss/accuracy.
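A minimal sketch of carving out three disjoint sets, assuming you start from a single pool of indexed samples (the split fractions here are arbitrary example values):

```python
import numpy as np

def train_val_test_split(n_samples, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle sample indices once and split them into three
    disjoint index sets: train, validation, and test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = int(n_samples * test_frac)
    n_val = int(n_samples * val_frac)
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]
    return train_idx, val_idx, test_idx

train_idx, val_idx, test_idx = train_val_test_split(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 700 150 150
```

Tune hyperparameters (including the dropout rate) against the validation split, and report the test-split metrics only once, at the very end.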
Upvotes: 3