Reputation: 174
Is there any way to add functionality to ImageDataGenerator so that it can take a list of filenames and randomly sample images for each minibatch?
I know that I could write a custom class that inherits from ImageDataGenerator, but I still don't know the details of how to do that.
Here is what I have done:
for epoch in range(epochs):
    print("epoch is: %d, total epochs: %d" % ((epoch + 1), int(epochs)))
    print("prepare training batch...")
    train_batch = makebatch(filelist=self.train_files, img_num=img_num, slice_times=slice_times)
    print("prepare validation batch..")
    val_batch = makebatch(filelist=self.val_files, img_num=int(math.ceil(img_num * 0.2)), slice_times=slice_times)

    x_train = train_batch
    y_train = x_train
    x_val = val_batch
    y_val = x_val

    print("generate training data...")
    train_datagen.fit(x_train)
    train_generator = train_datagen.flow(
        x=x_train,
        y=y_train,
        batch_size=16)
    val_datagen.fit(x_val)
    val_generator = val_datagen.flow(
        x=x_val,
        y=y_val,
        batch_size=16)

    print("start training..")
    history = model.fit_generator(
        generator=train_generator,
        steps_per_epoch=None,
        epochs=1,
        verbose=1,
        validation_data=val_generator,
        validation_steps=None,
        callbacks=self.callbacks)
What I really want is to remove the for loop and have the generator randomly sample images for each batch.
Can someone help with that?
Upvotes: 4
Views: 4385
Reputation: 982
Here is what I would do.
Suppose the lists of paths to all images are stored in the variables X_train and X_validation, and the labels are stored as y_train and y_validation.
First, I would define a Sequence generator. (This is from the Keras website.)
from skimage.io import imread
from skimage.transform import resize
from keras.utils import Sequence
import numpy as np

# Here, `x_set` is a list of paths to the images
# and `y_set` are the associated classes.

class CIFAR10Sequence(Sequence):

    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.array([
            resize(imread(file_name), (200, 200))
            for file_name in batch_x]), np.array(batch_y)
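Since you specifically want random sampling for each batch, you can also build on this: keras.utils.Sequence has an on_epoch_end hook that Keras calls after every epoch, so the class above can reshuffle its file list there. A minimal sketch (the subclass name ShuffledSequence is just for illustration, not from the original code):

import numpy as np

class ShuffledSequence(CIFAR10Sequence):

    def on_epoch_end(self):
        # Keras calls this after every epoch: reshuffle the paths and labels
        # together so the next epoch draws its batches in a new random order.
        order = np.random.permutation(len(self.x))
        self.x = [self.x[i] for i in order]
        self.y = [self.y[i] for i in order]

(fit_generator also has a shuffle argument that randomizes the order of the batches of a Sequence, but reshuffling the underlying file list changes which images end up in the same batch.)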
Now, I would define the generators for training and validation as
Xtrain_gen = CIFAR10Sequence(X_train, y_train, batch_size=512)  # you can choose your batch size
Xvalidation_gen = CIFAR10Sequence(X_validation, y_validation, batch_size=512)
Now, the final step is to train the model:
model.fit_generator(generator=Xtrain_gen, epochs=100, validation_data=Xvalidation_gen, use_multiprocessing=True)
This avoids the for loop for you, and it is very efficient because the CPU fetches data in parallel.
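One note on that last call: with use_multiprocessing=True you usually also want to raise workers (it defaults to 1) so that several batches are prepared in parallel, for example:

model.fit_generator(generator=Xtrain_gen,
                    epochs=100,
                    validation_data=Xvalidation_gen,
                    use_multiprocessing=True,
                    workers=4)  # number of loader processes; tune this for your machine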
Upvotes: 3