ZigZagZebra

Reputation: 1459

Keras record loss and accuracy of train and test for each batch

I am using Keras to train a CNN and I need to record the accuracy and loss for each batch. Is there a way to save these statistics? Below is the code I am using, but the accuracy comes back as None. It also looks like the callback is suppressing the progress bar.

class Histories(keras.callbacks.Callback):
    def __init__(self, test_data):
        self.test_data = test_data

    def on_train_begin(self, logs={}):
        self.train_acc = []
        self.test_acc = []
        self.train_loss = []
        self.test_loss = []

    def on_batch_end(self, batch, logs={}):
        train_loss_batch = logs.get('loss')
        train_acc_batch = logs.get('accuracy')
        self.train_loss.append(train_loss_batch)
        self.train_acc.append(train_acc_batch)
        print('\nTrain loss: {}, acc: {}\n'.format(train_loss_batch, train_acc_batch))

        x, y = self.test_data
        test_loss_batch, test_acc_batch = self.model.evaluate(x, y, verbose=0)
        self.test_loss.append(test_loss_batch)
        self.test_acc.append(test_acc_batch)
        print('\nTesting loss: {}, acc: {}\n'.format(test_loss_batch, test_acc_batch))

To use the callback:

histories = my_callbacks.Histories((x_test, y_test))
model.fit(x_train_reduced, y_train_reduced, batch_size=batch_size, epochs=epochs, verbose=1, callbacks=[histories])
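
As a side note on why the accuracy comes back as None: the metric key in the per-batch logs dict differs between Keras versions (for example 'acc' in older standalone Keras versus 'accuracy' in recent tf.keras). A small, hypothetical diagnostic callback (not part of the question) can print which keys your version actually provides:

import keras

class LogKeysProbe(keras.callbacks.Callback):
    # Hypothetical helper: print the keys this Keras version puts into the
    # per-batch logs dict (e.g. 'acc' vs 'accuracy') so the right one can be used.
    def on_batch_end(self, batch, logs=None):
        if batch == 0:
            print('Batch log keys:', sorted((logs or {}).keys()))

Running one epoch with callbacks=[LogKeysProbe(), histories] would show which key to pass to logs.get.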

Upvotes: 6

Views: 2821

Answers (1)

hedgehogues

Reputation: 237

I had the same problem: after each gradient update on a batch, I need to compute the loss for both the training set and the validation set.

The Keras API has two useful parameters here:

steps_per_epoch, validation_steps

They set the number of steps (batches) per epoch and per validation pass, respectively. So I wanted to set the epoch size to 20 examples, artificially making it equal to batch_size. After that I create a callback that is run each time a batch finishes processing:

from keras.callbacks import Callback

class LossHistory(Callback):
    def __init__(self):
        super(LossHistory, self).__init__()
        self.losses = []
        self.val_losses = []

    def on_train_begin(self, logs=None):
        self.losses = []
        self.val_losses = []

    def on_batch_end(self, batch, logs=None):
        # 'loss' is reported for every batch; 'val_loss' only appears in the
        # logs after an epoch ends, so within an epoch it is None (see P.S.).
        self.losses.append(logs.get('loss'))
        self.val_losses.append(logs.get('val_loss'))
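
For context, here is a minimal sketch (my own, not from the original answer) of how the callback and those two parameters could be wired together. The toy model and data (x_train, y_train, x_val, y_val) are placeholders, and whether fit accepts steps_per_epoch/validation_steps with plain array inputs depends on the Keras version (older standalone Keras exposes them mainly through fit_generator):

import numpy as np
import keras

# Placeholder data and model purely for illustration
x_train = np.random.rand(200, 10)
y_train = np.random.randint(0, 2, size=(200, 1))
x_val = np.random.rand(40, 10)
y_val = np.random.randint(0, 2, size=(40, 1))

model = keras.models.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(10,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

history_cb = LossHistory()
model.fit(x_train, y_train,
          batch_size=20,
          epochs=5,
          steps_per_epoch=1,        # one 20-example batch per "epoch", as described above
          validation_data=(x_val, y_val),
          validation_steps=1,
          callbacks=[history_cb])
# history_cb.losses now holds one training loss per batch;
# history_cb.val_losses holds whatever logs.get('val_loss') returned (None within an epoch).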

I reported this behaviour (val_loss being None on batch end) here. So far I am waiting for an answer, but I still have a problem that needs a solution.

Since there is no simple answer to this question, a workaround is needed. For this, you can use the members of the Callback class: the validation data is available there, and you can evaluate on it. It is done like this:

from keras.callbacks import Callback

class LossHistory(Callback):
    def __init__(self):
        super(LossHistory, self).__init__()
        self.losses = []
        self.val_losses = []

    def on_train_begin(self, logs=None):
        self.losses = []
        self.val_losses = []

    def on_batch_end(self, batch, logs=None):
        self.losses.append(logs.get('loss'))
        # self.validation_data holds the data passed via validation_data= in fit();
        # evaluate on it after every batch instead of relying on logs.get('val_loss').
        self.val_losses.append(self.model.evaluate(self.validation_data[0], self.validation_data[1]))
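
One detail worth noting about the workaround above (my addition, not part of the original answer): model.evaluate returns a single scalar when the model is compiled without metrics, but a list like [loss, metric1, ...] when metrics are compiled. A hedged drop-in replacement for on_batch_end that keeps only the loss value in either case:

    def on_batch_end(self, batch, logs=None):
        self.losses.append(logs.get('loss'))
        # evaluate() returns a list when the model is compiled with metrics;
        # keep only the loss so val_losses stays a list of scalars.
        result = self.model.evaluate(self.validation_data[0], self.validation_data[1], verbose=0)
        val_loss = result[0] if isinstance(result, list) else result
        self.val_losses.append(val_loss)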

P.S. logs.get('val_loss') is only filled in after each epoch, so during the batches of an epoch (including the very first batch of the first epoch) it will be None.

Upvotes: 3
