Reputation: 445
I am training a CNN in Keras with the TensorFlow backend,
mod1 = gmodel.fit(images, train_labels,
                  batch_size=100,
                  epochs=2,
                  verbose=1,
                  validation_data=(test_images, test_labels))
and at every epoch I can see the accuracy and loss printed in the output (everything seems fine up to this point):
Epoch 1/10
1203/1203 [==============================] - 190s - loss: 0.7600 - acc: 0.5628
- val_loss: 0.5592 - val_acc: 0.6933
Epoch 2/10
1203/1203 [==============================] - 187s - loss: 0.5490 - acc: 0.6933
- val_loss: 0.4589 - val_acc: 0.7930
Epoch 3/10
....
At the end, I want to plot the validation loss. In previous projects I have accessed it via mod1.history['val_loss'], but here I am getting an error as if .history() were empty:
TypeError Traceback (most recent call last)
<ipython-input-23-ecdd306e9232> in <module>()
----> 1 modl.history()
TypeError: 'History' object is not callable
EDIT (after the answer below): When I try to access the loss, for example, I get:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-34-06fcc6efb374> in <module>()
----> 1 mod1.history['val_loss']
TypeError: 'History' object is not subscriptable
I haven't found anything like this problem before, so I am lost as to what could be happening or how to debug.
Any pointers or ideas are greatly appreciated.
Upvotes: 3
Views: 15276
Reputation: 1
Try this; it will definitely work:
import pandas as pd
import matplotlib.pyplot as plt

Epochs = 30
Validation = (x_valid, y_valid)

model_clf.fit(x_train, y_train, epochs=Epochs, validation_data=Validation, batch_size=20)

model_clf.history.params   # training parameters (epochs, steps, ...)
model_clf.history.history  # dict of per-epoch metrics

# convert into a DataFrame
pd.DataFrame(model_clf.history.history)

# if you want to draw the graph
pd.DataFrame(model_clf.history.history).plot(figsize=(15, 7))
plt.grid(True)
plt.show()
Upvotes: 0
Reputation: 31
When a model is fitted, fit() returns a History object; you cannot call it with () or subscript it like history['loss'].
If you fitted the model with model.fit(), you can query the metrics through the model:
model.history.history.keys()
-> will give you ['acc', 'loss', 'val_acc', 'val_loss'], provided you compiled with a loss and listed accuracy as a metric. You can access every metric with the same pattern, for example model.history.history['acc'].
But if you assigned the returned History object to a local variable, like history = model.fit(X, Y), then the mode of access would be
history.history['acc']
history.history['val_acc']
Here we don't need to go through the model object, because the History object is saved in the local variable.
Also, don't forget to pass validation data (or use the validation_split parameter of fit()), otherwise there will be no validation metrics to access.
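To make the two access patterns concrete, here is a minimal, self-contained sketch; the toy model, the random data and the variable names are placeholders of mine, not taken from the question:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data and model, just to produce a History object
X = np.random.rand(100, 8)
Y = np.random.randint(0, 2, size=(100, 1))
model = Sequential([Dense(4, activation='relu', input_shape=(8,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Pattern 1: keep the History object that fit() returns
history = model.fit(X, Y, epochs=3, validation_split=0.2, verbose=0)
print(history.history.keys())       # 'loss', 'val_loss' plus 'acc' or 'accuracy', depending on the Keras version
print(history.history['val_loss'])  # one value per epoch

# Pattern 2: go through the model; fit() also stores the History object on model.history
print(model.history.history['loss'])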
Upvotes: 1
Reputation: 169
model.fit(x_train, y_train, batch_size=128, validation_data=(x_test, y_test))
vy = model.history.history['val_loss']
ty = model.history.history['loss']
Please pass validation_data to model.fit() for your test data; only then will model.history.history contain the validation metrics.
Reference: https://keras.io/callbacks/
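For completeness, the two lists above can then be plotted; this is a sketch of mine that assumes vy and ty from the snippet above and a standard matplotlib import:
import matplotlib.pyplot as plt

plt.plot(ty, label='training loss')    # 'loss' per epoch
plt.plot(vy, label='validation loss')  # 'val_loss' per epoch
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()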
Upvotes: 3
Reputation: 60370
Although you say you have called mod1.history['val_loss'], your error message tells a different story: most probably, as Daniel Moller has already commented, you have in fact used something like mod1.history() (i.e. with parentheses). Here is what I get (Python 3.5):
mod1.history()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-20-67bafe3187cc> in <module>()
----> 1 mod1.history()
TypeError: 'dict' object is not callable
mod1.history is not a function to be called with (); it is a Python dictionary:
mod1.history
# result:
{'acc': [0.82374999999999998,
0.94294999999999995,
0.95861666666666667,
...],
'loss': [0.62551526172161098,
0.18810810926556587,
0.13734668906728426,
...],
'val_loss': [12.05395287322998,
11.584557554626464,
10.949809835815429,
...]}
mod1.history['val_loss']
# result:
[12.05395287322998,
11.584557554626464,
10.949809835815429,
...]
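Since the original goal was to plot the validation loss, and mod1.history['val_loss'] is just a plain Python list, a possible last step (my sketch, assuming matplotlib is available) would be:
import matplotlib.pyplot as plt

val_loss = mod1.history['val_loss']
plt.plot(range(1, len(val_loss) + 1), val_loss)
plt.xlabel('epoch')
plt.ylabel('validation loss')
plt.show()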
Upvotes: 1