Reputation: 15
So I compiled a model with this code:
from tensorflow.keras.optimizers import Adam

def train(model, train_generator, test_generator):
    optimizer = Adam(lr=0.0001, decay=1e-6)
    model.compile(optimizer=optimizer,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    history = model.fit_generator(train_generator,
                                  epochs=100,
                                  steps_per_epoch=28709 // BATCH_SIZE,
                                  validation_steps=7178 // BATCH_SIZE,
                                  validation_data=test_generator)
And I'm getting this:
Epoch 1/100
895/897 [============================>.] - ETA: 0s - loss: 1.6074 - accuracy: 0.3578WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 224 batches). You may need to use the repeat() function when building your dataset.
897/897 [==============================] - 12s 13ms/step - loss: 1.6068 - accuracy: 0.3581 - val_loss: 1.4521 - val_accuracy: 0.4432
Epoch 2/100
897/897 [==============================] - 10s 11ms/step - loss: 1.3438 - accuracy: 0.4825
Epoch 3/100
897/897 [==============================] - 10s 11ms/step - loss: 1.2086 - accuracy: 0.5401
Epoch 4/100
897/897 [==============================] - 10s 11ms/step - loss: 1.1010 - accuracy: 0.5804
Epoch 5/100
897/897 [==============================] - 10s 11ms/step - loss: 1.0069 - accuracy: 0.6204
I'm not able to see the val_loss at the end of each epoch (except the first one).
What is missing in my code?
Does running it in Google Colab make any difference? I am able to get val_loss on my PC!
Thank you!
Upvotes: 1
Views: 486
Reputation: 650
Try reducing your steps_per_epoch (and correspondingly validation_steps). The warning clearly states that your input has run out of data: the generator cannot supply the `steps_per_epoch * epochs` (or `validation_steps * epochs`) batches that Keras asks for.
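As a minimal sketch of the arithmetic involved (assuming BATCH_SIZE = 32, which is not shown in the question but is consistent with the 897 steps and 224 validation batches in the log): if the generator yields each batch only once, it is exhausted after one pass, so it must loop indefinitely (or, for a tf.data pipeline, use `.repeat()`) to survive all epochs.

```python
# Sample counts from the question; BATCH_SIZE is an assumption.
N_TRAIN = 28709
N_VAL = 7178
BATCH_SIZE = 32

steps_per_epoch = N_TRAIN // BATCH_SIZE    # 897, matching the "897/897" in the log
validation_steps = N_VAL // BATCH_SIZE     # 224, matching the warning's "224 batches"

def repeating_batches(num_samples, batch_size):
    """Hypothetical generator that yields batch index ranges forever,
    so Keras never runs out of data between epochs."""
    while True:  # restart from the beginning once the data is consumed
        for start in range(0, num_samples - batch_size + 1, batch_size):
            yield (start, start + batch_size)

# The generator can be drawn from across multiple epochs without exhaustion:
gen = repeating_batches(N_VAL, BATCH_SIZE)
batches = [next(gen) for _ in range(validation_steps * 3)]  # 3 "epochs" of validation
```

A non-repeating generator (a plain `for` loop without the `while True`) would raise `StopIteration` partway through the second epoch, which is exactly the situation the TensorFlow warning describes.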
Upvotes: 1