NTH

Reputation: 21

EarlyStopping in Keras with consecutive epochs

I used early stopping in Keras with early_stopping = kr.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1, mode='auto'). However, I got the result below:

[Screenshot of the training log showing loss and val_loss per epoch]

The val_loss at epoch 2 is 0.6683 and the val_loss at epoch 3 is greater than 0.6683, but training keeps working and val_loss starts to decrease again from epoch 3. With patience=10, I would like training to stop only after 10 consecutive epochs with no improvement, not by simply comparing epoch 2 against epoch 12 like this. Does anyone know how to solve this? Thank you.

Upvotes: 1

Views: 1156

Answers (2)

NTH

Reputation: 21

I fixed it by modifying keras.callbacks.EarlyStopping:

Replaced

def on_epoch_end(self, epoch, logs=None):
    ...
    if self.monitor_op(current - self.min_delta, self.best):
        self.best = current
        self.wait = 0
    else:
        self.wait += 1
        if self.wait >= self.patience:
            self.stopped_epoch = epoch
            self.model.stop_training = True
    ...

with

def on_epoch_end(self, epoch, logs=None):
    ...
    current = logs.get(self.monitor)

    if epoch == 0:
        # first epoch (the epoch index starts at 0): nothing to compare against yet
        self.previous = current
    # compare against the previous epoch instead of the best value so far
    # (the original condition was: self.monitor_op(current - self.min_delta, self.best))
    elif self.monitor_op(current - self.min_delta, self.previous):
        # val_loss improved over the previous epoch: reset the counter
        self.wait = 0
    else:
        # no improvement over the previous epoch
        self.wait += 1
        print('now: ' + str(current) + ', pre: ' + str(self.previous) + ' not improved! wait: ' + str(self.wait))
        if self.wait >= self.patience:
            # no improvement for `patience` consecutive epochs: stop training
            self.stopped_epoch = epoch
            self.model.stop_training = True

    # remember this epoch's value for the next comparison
    self.previous = current
    ...
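
With the patched class, the callback is used exactly as before; a minimal usage sketch (the model and data names are placeholders, not from the original post):

early_stopping = kr.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1, mode='auto')
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100, callbacks=[early_stopping])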

Upvotes: 1

Simon Batzner

Reputation: 133

You can write your own callback, as described in the Keras documentation. In your case, you would edit the on_epoch_end() method of the Keras EarlyStopping class: instead of keeping track of the best value, keep track of the previous epoch's value and check whether it fails to improve for 10 consecutive epochs.
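
A minimal sketch of such a callback, assuming the monitored quantity is one where lower is better (e.g. val_loss); the class name ConsecutiveEarlyStopping and its parameters are made up for illustration:

import numpy as np
from keras.callbacks import Callback

class ConsecutiveEarlyStopping(Callback):
    """Stop training when the monitored value fails to improve over the
    previous epoch for `patience` consecutive epochs."""

    def __init__(self, monitor='val_loss', patience=10, min_delta=0.0):
        super(ConsecutiveEarlyStopping, self).__init__()
        self.monitor = monitor
        self.patience = patience
        self.min_delta = min_delta

    def on_train_begin(self, logs=None):
        self.previous = np.inf   # nothing to compare against yet
        self.wait = 0
        self.stopped_epoch = 0

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if current is None:
            return
        # compare with the previous epoch, not with the best value so far
        if current < self.previous - self.min_delta:
            self.wait = 0            # improved: reset the counter
        else:
            self.wait += 1           # no improvement over the previous epoch
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
        self.previous = current

It would then be passed to fit() like any other callback, e.g. callbacks=[ConsecutiveEarlyStopping(patience=10)].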

Upvotes: 0
