Reputation: 407
I am trying to use a custom metric for my neural network, and this metric should only be evaluated at the end of each epoch. The problem I encounter is that the metric is evaluated at each batch, which is not the behaviour I want. Note that I am working with generators and fit_generator with keras.
The validation data are loaded with a generator that implements keras.utils.Sequence:
import numpy as np
import keras

class DataGenerator(keras.utils.Sequence):
    def __init__(self, inputs, labels, batch_size):
        self.inputs = inputs
        self.labels = labels
        self.batch_size = batch_size

    def __getitem__(self, index):
        # some processing done here
        return batch_inputs, batch_labels

    def __len__(self):
        # number of batches per epoch
        return int(np.floor(len(self.inputs) / self.batch_size))
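For reference, the generator is used with fit_generator along these lines (the model name, data arrays and epoch count here are only placeholders):

train_gen = DataGenerator(x_train, y_train, batch_size=32)
val_gen = DataGenerator(x_val, y_val, batch_size=32)

# steps per epoch are inferred from __len__ since this is a Sequence
model.fit_generator(train_gen,
                    validation_data=val_gen,
                    epochs=10)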
I tried to implement what the keras documentation suggests, but I could not find any way to specify that the metric should only be evaluated at the end of the epoch.
import tensorflow as tf
from keras import backend as K

def auc_roc(y_true, y_pred):
    # tf.metrics.auc creates local variables that must be initialized
    auc, up_opt = tf.metrics.auc(y_true, y_pred)
    K.get_session().run(tf.local_variables_initializer())
    with tf.control_dependencies([up_opt]):
        auc = tf.identity(auc)
    return auc
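The metric is attached at compile time, which is why Keras evaluates it on every batch; something along these lines (optimizer and loss here are just placeholders):

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[auc_roc])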
So right now auc_roc is called after each batch instead of being called a single time at the end of the epoch.
Upvotes: 5
Views: 2648
Reputation: 2060
from sklearn.metrics import roc_auc_score
from keras.callbacks import Callback

class IntervalEvaluation(Callback):
    def __init__(self, validation_data=(), interval=10):
        super(IntervalEvaluation, self).__init__()
        self.interval = interval
        self.X_val, self.y_val = validation_data

    def on_epoch_end(self, epoch, logs={}):
        # only evaluate every `interval` epochs
        if epoch % self.interval == 0:
            y_pred = self.model.predict_proba(self.X_val, verbose=0)
            score = roc_auc_score(self.y_val, y_pred)
            print("interval evaluation - epoch: {:d} - score: {:.6f}".format(epoch, score))
Usage:
ival = IntervalEvaluation(validation_data=(x_test2, y_test2), interval=1)
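The callback is then passed to training like any other Keras callback; for example, with the DataGenerator from the question (the model and training data names below are placeholders):

model.fit_generator(DataGenerator(x_train, y_train, batch_size=32),
                    epochs=10,
                    callbacks=[ival])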
More Info: http://digital-thinking.de/keras-three-ways-to-use-custom-validation-metrics-in-keras/
Upvotes: 3