Reputation: 148
I am building a neural network for binary classification of images using Python and TensorFlow. I run the code inside a Docker container (to standardize my development and production environments); the base image is the official TensorFlow image (here) and I just install the minimum required packages.
The problem is the following: once I have trained the network, I want to create a pipeline to feed it new images, passing each image individually instead of using the flow_from_directory function.
The structure of the network, as well as the process of saving it, is in this code:
import os
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop

train_dir = 'data_set/train'
train_dir_A = os.path.join(train_dir, 'A')
train_dir_B = os.path.join(train_dir, 'B')
validation_dir = 'data_set/validation'
validation_dir_A = os.path.join(validation_dir, 'A')
validation_dir_B = os.path.join(validation_dir, 'B')

train_datagen = ImageDataGenerator(rescale=1.0 / 255.0)
dev_datagen = ImageDataGenerator(rescale=1.0 / 255.0)
test_datagen = ImageDataGenerator(rescale=1.0 / 255.0)

train_generator = train_datagen.flow_from_directory(train_dir,
                                                    batch_size=20,
                                                    class_mode='binary',
                                                    target_size=(300, 300))
validator_generator = test_datagen.flow_from_directory(validation_dir,
                                                       batch_size=20,
                                                       class_mode='binary',
                                                       target_size=(300, 300))

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# train the model
model.compile(optimizer=RMSprop(lr=0.001),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit_generator(train_generator,
                              validation_data=validator_generator,
                              steps_per_epoch=100,
                              epochs=15,
                              validation_steps=50,
                              verbose=1)

model.save('results/model_v0')
model.summary()
I am trying to pass a new image like this:
import numpy as np
from scipy import ndimage
from tensorflow.keras.models import load_model

image_data = ndimage.imread(ROUTE_TO_NEW_IMAGE).astype(float)
image_data = image_data / 255
data = image_data[np.newaxis, ...]
# load model
model = load_model(ROUTE_MODEL)
# summarize model
model.summary()
# predict
model.predict(data)
The error message: ValueError: Error when checking input: expected conv2d_input to have shape (300, 300, 3) but got array with shape (375, 375, 4)
The question is: how can I use these images as input?
Observations: all the images (the original training ones and the new ones I am passing) have the same size. I do not need color; it is not a problem for me to adapt everything to work in grayscale.
Upvotes: 0
Views: 1680
Reputation: 36704
You will need to feed images of the same shape as the pictures the neural network was trained on. I suggest you do the following steps:

- resize the image to (300, 300)
- convert it to rgb, dropping the alpha channel
- rescale the pixel values to the [0, 1] range
- add a batch dimension (keras expects 4D input for color images)

All of that can be done with PIL:
from PIL import Image
import numpy as np

# resize to the model's input size and drop the alpha channel
pic = Image.open('mypic.png').convert('RGB').resize((300, 300))
# rescale pixel values to [0, 1]
pic = np.array(pic) / 255
# add a batch dimension so the shape is (1, 300, 300, 3)
pic = pic[np.newaxis, ...]
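Once the array has this shape it can go straight into the saved network. A minimal sketch, assuming the model was saved to 'results/model_v0' as in your code and using the usual 0.5 decision threshold:

from tensorflow.keras.models import load_model

# load the trained network saved with model.save(...)
model = load_model('results/model_v0')

# pic has shape (1, 300, 300, 3), matching the conv2d_input the model expects
prob = model.predict(pic)[0][0]

# flow_from_directory assigns labels alphabetically, so 'A' -> 0 and 'B' -> 1
print('B' if prob > 0.5 else 'A')

If you later decide to work in grayscale instead, the network would need input_shape=(300, 300, 1), the generators color_mode='grayscale', and the preprocessing above .convert('L') instead of .convert('RGB').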
Upvotes: 2