Reputation: 3616
I use TensorFlow Keras to train a neural network. Currently, I use the following callback to reduce the learning rate over the course of training:
# Note: LearningRateScheduler passes (epoch, lr) to the schedule function, in that order
def learning_rate_scheduler(epoch, lr):
    return lr * tf.math.exp(-0.1)
I use the callback as follows:
callback = tf.keras.callbacks.LearningRateScheduler(learning_rate_scheduler)
model.fit(x_train, y_train, epochs=10, callbacks=[callback], verbose=2)
This works as expected. With this approach, however, the learning rate is reduced only once per epoch. How can I modify this callback so that it is called n times per epoch instead of only once? Is that possible?
Upvotes: 0
Views: 1039
Reputation: 1134
To do this, you will need to create a custom callback so that you have access to batch-level methods. When you inherit from tf.keras.callbacks.Callback, you can override on_train_batch_end and set the learning rate on each batch. If you want to do it every N steps, you can add a step counter and increment it every time on_train_batch_end is called, then only set the learning rate when the counter is a multiple of N. Some boilerplate code could look like this:
import tensorflow as tf

class LearningRateSchedule(tf.keras.callbacks.Callback):
    def __init__(self, N):
        super(LearningRateSchedule, self).__init__()
        self.N = N

    def on_train_begin(self, logs=None):
        self.step = 0

    def on_train_batch_end(self, batch, logs=None):
        self.step += 1
        if self.step % self.N == 0:
            # Set the new learning rate on the model's optimizer
            lr = self.get_lr()
            tf.keras.backend.set_value(self.model.optimizer.lr, lr)

    def get_lr(self):
        # Compute the new learning rate; here, decay the current value
        # the same way the epoch-wise schedule in the question does
        current_lr = tf.keras.backend.get_value(self.model.optimizer.lr)
        return current_lr * float(tf.math.exp(-0.1))
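A minimal usage sketch, assuming the same model and training data as in the question (the choice of N = 100 batches here is arbitrary):

# Reduce the learning rate every 100 batches
callback = LearningRateSchedule(N=100)
model.fit(x_train, y_train, epochs=10, callbacks=[callback], verbose=2)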
Upvotes: 1