Reputation: 1721
Why, when len(train2) = 75, does the execution verbose show a step count greater than 75?
My Model Execution Script
batch_size = 64
h5_path = "EPOC_1_Feb_25_model.h5"
checkpoint = ModelCheckpoint(h5_path, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
history = model.fit_generator(
    data_gen(train2, id_label_map, batch_size, augment=True),
    validation_data=data_gen(val, id_label_map, batch_size),
    epochs=1, verbose=1,
    callbacks=[checkpoint],
    steps_per_epoch=len(train) // batch_size,
    validation_steps=len(val) // batch_size)
model.load_weights(h5_path)
The Execution Verbose
Epoch 1/1
7658/9409 [=======================>......]
given len(train2) = 75
Why is this showing 7658/9409?
My Data Generator is
def data_gen(list_files, id_label_map, batch_size, augment=False):
    seq = get_seq()
    while True:
        shuffle(list_files)
        for batch in chunker(list_files, batch_size):
            X = [cv2.imread(x) for x in batch]
            Y = [id_label_map.get(x) for x in batch]  # [id_label_map[get_id_from_file_path(x)] for x in batch]
            if augment:
                X = seq.augment_images(X)
            X = [preprocess_input(x) for x in X]
            yield np.array(X), np.array(Y)

def chunker(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))
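For reference, chunker just slices the list into consecutive batches, so one full pass over 75 files with batch_size = 64 produces only two batches (64 + 11), and 75 // 64 = 1 step per epoch. A minimal standalone check (file names are stand-ins for the real paths):

```python
def chunker(seq, size):
    # Same helper as above: consecutive slices of length `size`.
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

files = ["img_%d.png" % i for i in range(75)]  # stand-in for the real file list
batches = list(chunker(files, 64))
print(len(batches), len(batches[0]), len(batches[1]))  # 2 64 11
print(len(files) // 64)  # 1 -> the step count the progress bar should show
```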
Upvotes: 0
Views: 39
Reputation: 138
You wrote

steps_per_epoch = len(train) // batch_size

instead of

steps_per_epoch = len(train2) // batch_size

You pass train2 to the generator, but compute the step count from train, which is evidently a much larger list. That is why the progress bar shows 9409 steps instead of 75 // 64 = 1.
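To see the arithmetic, here is a sketch with made-up list sizes chosen only to match the numbers in the question (any len(train) between 9409*64 and 9409*64 + 63 gives the same step count):

```python
batch_size = 64
train2 = ["img_%d.png" % i for i in range(75)]        # the 75 files actually fed to the generator
train = ["img_%d.png" % i for i in range(9409 * 64)]  # hypothetical size consistent with 9409 steps

wrong_steps = len(train) // batch_size   # what the question computes
right_steps = len(train2) // batch_size  # what it should compute
print(wrong_steps, right_steps)  # 9409 1
```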
Upvotes: 1
Reputation: 2453
I think it comes from your data augmentation. Try setting augment=False in your data_gen call to test it.
Upvotes: 0