jewelltaylor9430

Reputation: 119

TensorFlow TensorBoard not showing acc, loss, val_acc, and val_loss, only epoch_accuracy and epoch_loss

I am looking to have TensorBoard display graphs corresponding to acc, loss, val_acc, and val_loss, but they do not appear for some reason. Here is what I am seeing:

[screenshot: TensorBoard showing only epoch_accuracy and epoch_loss graphs]

I am looking to have this: [screenshot: TensorBoard showing separate acc, loss, val_acc, and val_loss graphs]

I am following the instructions here to be able to use TensorBoard in a Google Colab notebook.
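(For reference, TensorBoard is loaded in the notebook with the standard Colab magics. This is a sketch; the exact --logdir value is an assumption:)

%load_ext tensorboard
%tensorboard --logdir logs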

This is the code used to generate the TensorBoard logs:

import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard

opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)

tensorboard = TensorBoard(log_dir="logs/{}".format(NAME), 
                          histogram_freq=1, 
                          write_graph=True, 
                          write_grads=True, 
                          batch_size=BATCH_SIZE, 
                          write_images=True)

model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy']
)

# Train model
history = model.fit(
     train_x, train_y,
     batch_size=BATCH_SIZE,
     epochs=EPOCHS,
     validation_data=(validation_x, validation_y),
     callbacks=[tensorboard]
)

How do I go about solving this issue? Any ideas? Your help is very much appreciated!

Upvotes: 0

Views: 2105

Answers (1)

Vedanshu

Reputation: 2296

That's the intended behavior: in TensorFlow 2, the Keras TensorBoard callback logs each metric once per epoch under the epoch_accuracy and epoch_loss tags, with the training and validation values shown as separate runs ("train" and "validation") rather than as separate graphs. If you want to log custom scalars such as a dynamic learning rate, you need to use the TensorFlow Summary API.

As an example, retrain a simple regression model and log a custom learning rate. Here's how:

  1. Create a file writer, using tf.summary.create_file_writer().
  2. Define a custom learning rate function. This will be passed to the Keras LearningRateScheduler callback.
  3. Inside the learning rate function, use tf.summary.scalar() to log the custom learning rate.
  4. Pass the LearningRateScheduler callback to Model.fit().

In general, to log a custom scalar, you need to use tf.summary.scalar() with a file writer. The file writer is responsible for writing data for this run to the specified directory, and it is used implicitly when you call tf.summary.scalar().

from datetime import datetime

import tensorflow as tf
from tensorflow import keras

logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()

def lr_schedule(epoch):
  """
  Returns a custom learning rate that decreases as epochs progress.
  """
  learning_rate = 0.2
  if epoch > 10:
    learning_rate = 0.02
  if epoch > 20:
    learning_rate = 0.01
  if epoch > 50:
    learning_rate = 0.005

  tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
  return learning_rate

lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

model = keras.models.Sequential([
    keras.layers.Dense(16, input_dim=1),
    keras.layers.Dense(1),
])

model.compile(
    loss='mse', # keras.losses.mean_squared_error
    optimizer=keras.optimizers.SGD(),
)

training_history = model.fit(
    x_train, # input
    y_train, # output
    batch_size=train_size,
    verbose=0, # Suppress chatty output; use TensorBoard instead
    epochs=100,
    validation_data=(x_test, y_test),
    callbacks=[tensorboard_callback, lr_callback],
)
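The same pattern can also be applied to the metrics from the question. Below is a minimal sketch of my own (not from the tutorial): a hypothetical LegacyTagLogger callback that re-logs the epoch-level metrics under the old acc/val_acc tag names via tf.summary.scalar(), assuming TF 2.x where Keras reports them as accuracy/val_accuracy:

import tensorflow as tf

class LegacyTagLogger(tf.keras.callbacks.Callback):
    """Hypothetical callback: re-logs epoch metrics under the old tag names."""
    def __init__(self, logdir):
        super().__init__()
        self.writer = tf.summary.create_file_writer(logdir)

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # Map the TF 2.x metric names to the legacy tags the question expects.
        mapping = {'acc': 'accuracy', 'loss': 'loss',
                   'val_acc': 'val_accuracy', 'val_loss': 'val_loss'}
        with self.writer.as_default():
            for old_tag, new_name in mapping.items():
                if new_name in logs:
                    tf.summary.scalar(old_tag, logs[new_name], step=epoch)
        self.writer.flush()

# Usage: model.fit(..., callbacks=[LegacyTagLogger("logs/legacy")])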

Upvotes: 1
