darvida

Reputation: 59

CNN: Normal that the validation loss decreases much slower than training loss?

I'm training a CNN U-net model for semantic segmentation of images. However, the training loss seems to decrease at a much faster rate than the validation loss. Is this normal?

I'm using a learning rate of 0.002.

The training and validation loss can be seen in the image below:

[Image: Loss for training and validation]

Upvotes: 3

Views: 2850

Answers (1)

Derlin

Reputation: 9871

Yes, this is perfectly normal.

As the NN learns, it infers from the training samples, which it therefore knows better at each iteration. The validation set, on the other hand, is never used during training, which is why it is so important.

Basically:

  • as long as the validation loss decreases (even slightly), it means the NN is still able to learn/generalise better,
  • as soon as the validation loss stagnates, you should stop training,
  • if you keep training, the validation loss will likely increase again; this is called overfitting. Put simply, it means the NN learns the training data "by heart" instead of really generalising to unknown samples (such as those in the validation set).

We usually use early stopping to avoid the latter: basically, if your validation loss doesn't improve for X iterations, stop training (X being a value such as 5 or 10).
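A minimal sketch of what this can look like in practice, assuming a TensorFlow/Keras setup (the question doesn't say which framework is used; the dense model and random data below are just placeholders for your U-net and real datasets):

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Dummy data and model only so the snippet runs end-to-end;
# substitute your U-net and real training/validation sets.
x_train, y_train = np.random.rand(200, 8), np.random.rand(200, 1)
x_val, y_val = np.random.rand(50, 8), np.random.rand(50, 1)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# Stop once val_loss hasn't improved for 10 epochs (the "X" above),
# and roll back to the weights of the best epoch seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=10,
    restore_best_weights=True,
)

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=200,                # upper bound; early stopping cuts it short
    callbacks=[early_stop],
    verbose=0,
)

# Plot both curves, as in the question's figure.
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
```

With `restore_best_weights=True` you also keep the weights from the epoch with the lowest validation loss, rather than those of the last epoch before stopping.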

Upvotes: 5
