I am learning to use TensorBoard with TensorFlow 2.0.
In particular, I would like to monitor the learning curves in real time and also to visually inspect and communicate the architecture of my model.
Below I will provide code for a reproducible example.
I have three problems:
1. Although I get the learning curves once the training is over, I don't know what I should do to monitor them in real time.
2. The learning curve I get from TensorBoard does not agree with the plot of history.history. In fact, it is bizarre, and its reversals are difficult to interpret.
3. I cannot make sense of the graph. I have trained a sequential model with 5 dense layers and dropout layers in between, but what TensorBoard shows me has many more elements in it.
My code is the following:
from datetime import datetime

import matplotlib.pyplot as plt
import keras
from keras.layers import Input, Dense, Dropout
from keras.models import Model
from keras.datasets import boston_housing
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
# Five dense layers (He init, ELU) with dropout in between
inputs = Input(shape=(train_data.shape[1],))
x1 = Dense(100, kernel_initializer='he_normal', activation='elu')(inputs)
x1a = Dropout(0.5)(x1)
x2 = Dense(100, kernel_initializer='he_normal', activation='elu')(x1a)
x2a = Dropout(0.5)(x2)
x3 = Dense(100, kernel_initializer='he_normal', activation='elu')(x2a)
x3a = Dropout(0.5)(x3)
x4 = Dense(100, kernel_initializer='he_normal', activation='elu')(x3a)
x4a = Dropout(0.5)(x4)
x5 = Dense(100, kernel_initializer='he_normal', activation='elu')(x4a)
predictions = Dense(1)(x5)

model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='Adam', loss='mse')

# Timestamped log directory so each run gets its own subfolder
logdir = "logs\\fit\\" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
history = model.fit(train_data, train_targets,
                    batch_size=32,
                    epochs=20,
                    validation_data=(test_data, test_targets),
                    shuffle=True,
                    callbacks=[tensorboard_callback])

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
I think what you can do is to launch TensorBoard before calling .fit()
on your model. If you are using IPython (Jupyter or Colab) and have already installed TensorBoard, here's how you can modify your code:
from datetime import datetime

import matplotlib.pyplot as plt
import keras
from keras.layers import Input, Dense, Dropout
from keras.models import Model
from keras.datasets import boston_housing

(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
inputs = Input(shape=(train_data.shape[1],))
x1 = Dense(100, kernel_initializer='he_normal', activation='relu')(inputs)
x1a = Dropout(0.5)(x1)
x2 = Dense(100, kernel_initializer='he_normal', activation='relu')(x1a)
x2a = Dropout(0.5)(x2)
x3 = Dense(100, kernel_initializer='he_normal', activation='relu')(x2a)
x3a = Dropout(0.5)(x3)
x4 = Dense(100, kernel_initializer='he_normal', activation='relu')(x3a)
x4a = Dropout(0.5)(x4)
x5 = Dense(100, kernel_initializer='he_normal', activation='relu')(x4a)
predictions = Dense(1)(x5)

model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='Adam', loss='mse')

logdir = "logs\\fit\\" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
In another cell, you can run:
# Magic func to use TensorBoard directly in IPython
%load_ext tensorboard
Launch TensorBoard by running this in another cell:
# Launch TensorBoard, pointed at the log directory.
# Pass the actual path (the parent "logs/fit" works, since TensorBoard
# scans subdirectories), not the Python variable name `logdir`.
# This should launch TensorBoard inline or in your browser, but you may
# not see any data until training starts writing event files.
%tensorboard --logdir logs/fit
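As an aside (not part of the original answer, but these helpers ship in the tensorboard package's notebook module): from another cell you can list running TensorBoard instances and re-display one by port, which is handy if you lose the output cell:
from tensorboard import notebook

notebook.list()  # print currently running TensorBoard instances
notebook.display(port=6006, height=800)  # re-show the instance on that port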
And you can finally call .fit() on your model in another cell:
history = model.fit(train_data, train_targets,
                    batch_size=32,
                    epochs=20,
                    validation_data=(test_data, test_targets),
                    shuffle=True,
                    callbacks=[tensorboard_callback])

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
If you are not using IPython, you just have to launch TensorBoard before or during training to monitor your model in real time.
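For example, from a terminal (a minimal sketch, assuming the same logs/fit directory used above), start TensorBoard as a separate process and leave it running while the model trains:
tensorboard --logdir logs/fit
Then open http://localhost:6006 (TensorBoard's default port) in a browser; the learning curves update as new event files are written during training.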