Franva

Reputation: 7077

Fast.Ai EarlyStoppingCallback does not work

[screenshot: fastai training output showing training stopped early at Epoch 6]

callbacks = [EarlyStoppingCallback(learn, monitor='error_rate', min_delta=1e-5, patience=5)]
learn.fit_one_cycle(30, callbacks=callbacks, max_lr=slice(1e-5,1e-3))

As you can see, I use patience=5, min_delta=1e-5, and monitor='error_rate'.

My understanding is that patience is the number of epochs to wait while the improvement in the monitored value (here, error_rate) is less than min_delta.

If my understanding were correct, training would not stop at Epoch 6.

So is my understanding wrong, or is this a bug in the fastai library?

Upvotes: 0

Views: 981

Answers (1)

conv3d

Reputation: 2896

It keeps track of the best value of the monitored quantity (self.best) and compares this epoch's value, shifted by min_delta, against it:

class EarlyStoppingCallback(TrackerCallback):
    ...
    # current = monitored value this epoch; self.best = best value seen so far
    if self.operator(current - self.min_delta, self.best):
        self.best, self.wait = current, 0   # counts as an improvement: reset
    else:
        self.wait += 1                      # no improvement: count this epoch
        if self.wait > self.patience:
            print(f'Epoch {epoch}: early stopping')
            return {"stop_training": True}
    ...

So self.wait is reset only when self.operator judges the change an improvement; otherwise it increments each epoch, and once it exceeds patience (the 6th consecutive non-improving epoch with patience=5), training stops. With monitor='error_rate' the operator resolves to np.greater, so even an epoch where the error drops fails the test:
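A minimal standalone sketch of this counter logic (run_early_stopping is a hypothetical helper mimicking the callback's behaviour, not fastai code; the operator argument stands in for self.operator):

```python
import numpy as np

def run_early_stopping(values, operator, min_delta=1e-5, patience=5):
    """Simulate the EarlyStoppingCallback counter over a list of monitored
    values; return the epoch at which training stops, or None."""
    best, wait = values[0], 0
    for epoch, current in enumerate(values[1:], start=1):
        if operator(current - min_delta, best):
            best, wait = current, 0   # "improvement": reset the counter
        else:
            wait += 1                 # no "improvement": count this epoch
            if wait > patience:
                return epoch
    return None

# A steadily *decreasing* error rate never satisfies np.greater,
# so the counter is never reset and training halts after epoch 6:
errors = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
print(run_early_stopping(errors, np.greater))  # 6
# With np.less (the direction a falling error should use), it never stops:
print(run_early_stopping(errors, np.less))     # None
```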

np.greater(0.000638 - 1e-5, 0.000729)  # current - min_delta vs. self.best
False                                  # so self.wait increments despite the improvement

There does seem to be an issue, though: with np.greater, a large jump in the error rate would be stored as self.best and reset the counter, which is clearly not what we want. The point of this callback is to stop training once the error rate stops improving, and right now it is doing the opposite.

So in TrackerCallback there might need to be a change in:

mode_dict['auto'] = np.less if 'loss' in self.monitor else np.greater

to

mode_dict['auto'] = np.less if 'loss' in self.monitor or 'error' in self.monitor else np.greater
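A quick check of the two 'auto' rules (auto_operator is a hypothetical helper reproducing just this one-line choice; the surrounding TrackerCallback code may differ):

```python
import numpy as np

def auto_operator(monitor, include_error=False):
    # Hypothetical helper: the 'auto' mode choice before and after the fix.
    if include_error:  # proposed: treat error-like metrics as "lower is better"
        return np.less if 'loss' in monitor or 'error' in monitor else np.greater
    return np.less if 'loss' in monitor else np.greater  # current behaviour

# Current behaviour: error_rate is compared with np.greater,
# so a decrease in error never counts as an improvement.
assert auto_operator('error_rate') is np.greater
# With the proposed fix, error_rate is treated like a loss.
assert auto_operator('error_rate', include_error=True) is np.less
# Loss metrics are unaffected either way.
assert auto_operator('valid_loss') is np.less
```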

Upvotes: 3
