Tlokuus

Reputation: 337

How to preserve the epoch number with Keras when performing multiple runs?

With a Keras model, I've included the TensorBoard callback to generate log files to be visualised later.

The problem is that if I train my model multiple times, it generates multiple log files, and the step number always restarts at 0 instead of continuing from the last step of the previous run.

This makes the graphs in TensorBoard unusable (screenshot below).

With raw TensorFlow, I've seen this can be solved by adding a "global_step" tensor to keep track of the epoch number between runs.

But how can I do this using Keras?

Glitchy graphs in TensorBoard

Upvotes: 6

Views: 1087

Answers (1)

BallpointBen

Reputation: 13750

model.fit has an argument initial_epoch, 0 by default, that tells the model which epoch it's starting at. You can use this to resume a previous training run, so the epoch counter (and therefore the TensorBoard step) continues where the last run left off.
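A minimal sketch of how this looks in practice. The model, data, log directory, and epoch counts here are illustrative assumptions; the key detail is that `epochs` is the epoch to stop *at*, not a number of additional epochs, so the second call runs epochs 5 through 9.

```python
import numpy as np
import tensorflow as tf

# Toy model and data, purely for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

tb = tf.keras.callbacks.TensorBoard(log_dir="logs")

# First run: epochs 0 through 4.
model.fit(x, y, epochs=5, callbacks=[tb], verbose=0)

# Later run: resume at epoch 5 so TensorBoard continues the same
# curve instead of restarting at step 0. Note that epochs=10 is the
# end epoch, so this trains epochs 5 through 9.
history = model.fit(x, y, initial_epoch=5, epochs=10,
                    callbacks=[tb], verbose=0)
```

If you save and reload the model between runs (e.g. with a checkpoint), you would pass the epoch you stopped at as `initial_epoch` when calling `fit` again.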

Upvotes: 5

Related Questions