Will Cowan

Reputation: 91

How frequently should I be checking for over-fitting? And how can I automatically detect/act on it?

I am new to both ML and neural networks, and have been reading 'http://neuralnetworksanddeeplearning.com', which addresses over-fitting in its third chapter.

The author evaluates his example model on the test set after each epoch, so he can plot training accuracy and test accuracy (or the change in cost) against epochs.

I am currently editing my program so that my NN is evaluated on a test set every 2 epochs, and then I will plot epochs vs. test accuracy. I would guess this is too frequent, but I'm not sure...

The author states that once test accuracy stops improving (while the cost on the training set keeps decreasing), over-fitting is occurring. So I think I am going to write code to detect when test accuracy plateaus, so I can then automatically switch the training set (either to a completely new one, or to a different fold from k-fold cross-validation) - but there may be a better way?
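To make the plateau check concrete, here is a minimal sketch of how such a detector might look. The function name `should_stop` and the `patience` parameter (how many evaluations with no improvement to tolerate before acting) are my own illustrative choices, not from the book:

```python
def should_stop(accuracies, patience=5):
    """Return True if the last `patience` test-accuracy readings show no
    improvement over the best accuracy seen before them."""
    if len(accuracies) <= patience:
        return False  # not enough history to judge a plateau yet
    best_before = max(accuracies[:-patience])
    return max(accuracies[-patience:]) <= best_before
```

Using a patience window rather than reacting to the first dip avoids switching the training set on ordinary epoch-to-epoch noise.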

Thanks in advance for any advice and direction :)

Upvotes: 0

Views: 88

Answers (2)

Joe Smith

Reputation: 191

Early stopping is a bit tricky if you're just starting out, because it ties optimization together with regularization. It would be better to use L2 regularization or dropout instead; then you can optimize in one step and regularize in another.
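To illustrate what those two regularizers actually do, here is a minimal NumPy sketch. The function names `l2_penalty` and `dropout`, and the `lam`/`rate` values, are illustrative choices of mine, not from any particular library:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_penalty(weights, lam=1e-4):
    """L2 regularization: add lam * (sum of squared weights) to the loss,
    nudging the network toward smaller weights."""
    return lam * sum(np.sum(w ** 2) for w in weights)

def dropout(activations, rate=0.5):
    """Inverted dropout: zero each activation with probability `rate` during
    training, rescaling the survivors so the expected value is unchanged."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)
```

Both penalties are applied during training only; at test time you use the full network with no dropout mask and no added loss term.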

Upvotes: 1

nuric

Reputation: 11225

This is called early stopping: you halt training when the test / validation loss stops improving. The Deep Learning Book by Ian Goodfellow et al. covers model regularisation in Chapter 7, including early stopping in section 7.8.

Keras has a callback, aptly named EarlyStopping, that performs this check and stops the training. By default it will monitor val_loss and stop the training if it stops improving.
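A minimal sketch of the callback in use; the tiny model, the synthetic data, and the `patience=3` value are illustrative, not prescriptive:

```python
import numpy as np
from tensorflow import keras

# Synthetic binary-classification data, just to have something to fit.
x = np.random.rand(200, 8)
y = np.random.randint(0, 2, size=(200, 1))

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stop once val_loss has not improved for 3 consecutive epochs,
# and roll back to the best weights seen so far.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

history = model.fit(x, y, validation_split=0.2, epochs=100,
                    callbacks=[early_stop], verbose=0)
```

With `restore_best_weights=True`, the extra epochs spent past the plateau do not cost you anything: the model ends up with the weights from its best validation epoch.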

Upvotes: 1
