Reputation: 1702
I've created a neural network of the following form in Keras:
from keras.layers import Dense, Activation, Input
from keras import Model

input_dim_v = 3
output_dim_v = 1  # output size; the question uses self.output_dim_v, the actual value is not given
hidden_dims = [100, 100, 100]

inputs = Input(shape=(input_dim_v,))
net = inputs
for h_dim in hidden_dims:
    net = Dense(h_dim)(net)
    net = Activation("elu")(net)
outputs = Dense(output_dim_v)(net)

model_v = Model(inputs=inputs, outputs=outputs)
model_v.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse'])
Later, I train it on single examples using model_v.train_on_batch(X[i], y[i]).
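For context, here is a minimal sketch of that incremental training loop (my own illustration, not code from the question), assuming X and y are NumPy arrays that grow over time; each sample is sliced as X[i:i+1] so it keeps the leading batch dimension that train_on_batch expects:

import numpy as np

# toy accumulated data; shapes are assumed for illustration only
X = np.random.rand(50, input_dim_v)
y = np.random.rand(50, output_dim_v)

for i in range(len(X)):
    # train on a single example, passed as a batch of size one
    model_v.train_on_batch(X[i:i+1], y[i:i+1])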
To test whether the neural network is becoming a better function approximator, I want to periodically evaluate the model on the accumulated X and y (in my case, X and y grow over time). However, when I call model_v.evaluate(X, y), only the characteristic progress bar appears in the console; neither the loss value nor the mse metric (which are the same in this case) is printed.
How can I change that?
Upvotes: 3
Views: 2875
Reputation: 33420
The loss and metric values are not shown in the progress bar of the evaluate() method. Instead, they are returned as its output, so you can print them yourself:
for i in range(n_iter):
    # ... get the i-th batch or sample
    # ... train the model using the `train_on_batch` method

    # periodically evaluate the model on the whole or part of the test data
    loss_metric = model.evaluate(test_data, test_labels)
    print(loss_metric)
According to the documentation, if your model has multiple outputs and/or metrics, you can use the model.metrics_names attribute to find out which value in loss_metric corresponds to which loss or metric.
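For example, a small illustration (using the model_v from the question) that pairs each returned value with its name; passing verbose=0 also suppresses the progress bar:

results = model_v.evaluate(X, y, verbose=0)
for name, value in zip(model_v.metrics_names, results):
    print(name, value)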
Upvotes: 5