Lakwin Chandula

Reputation: 159

Image Classification with TensorFlow and Keras

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K


# dimensions of our images.
img_width, img_height = 150, 150


train_data_dir = 'flowers/train'
validation_data_dir = 'flowers/validation'
nb_train_samples = 2500
nb_validation_samples = 1000
epochs = 20
batch_size = 50


if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)


model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(5))
model.add(Activation('softmax'))


model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])


# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')


validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')


model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)


model.save_weights('first_flowers_try.h5')

We trained this model to classify 5 image classes. We used 500 images per class to train the model and 200 images per class to validate it. We used Keras with the TensorFlow backend. It uses data that can be downloaded at: https://www.kaggle.com/alxmamaev/flowers-recognition

In our setup, the images are organised into flowers/train and flowers/validation, with one subdirectory per class (flow_from_directory infers the classes from these subdirectories).

How can we predict/test and identify the class of a new image using this trained model?

Upvotes: 5

Views: 854

Answers (3)

Yannis

Reputation: 721

As per Keras' documentation, you will have to use predict(self, x, batch_size=None, verbose=0, steps=None). Since you use softmax as the activation function in your final layer, this returns the probability of each class. If you just want the most probable class, take the one with the highest probability:

import numpy as np

class_list = ['class1', 'class2', 'class3', 'class4', 'class5'] # A list of your class names
model.load_weights('first_flowers_try.h5') # Loads the saved weights
predicted_vector = model.predict(x) # x is the preprocessed image array; vector with the prob of each class
print(class_list[np.argmax(predicted_vector)]) # Prints the element with the highest prob
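
Note that model.predict expects a preprocessed image array, not a file path, so the new image has to be loaded and scaled the same way as during training. A minimal sketch of building x, assuming the 150x150 input size and 1/255 rescaling from the question:

from keras.preprocessing import image
import numpy as np

img = image.load_img('path_to_your_new_image', target_size=(150, 150)) # resize to the training input size
x = image.img_to_array(img) / 255. # rescale exactly as in training
x = np.expand_dims(x, axis=0) # add a batch dimension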

Now, about getting class_list, you can try this:

import os
class_list = sorted(os.listdir('flowers/train')) # the class subdirectories of the training folder, sorted alphabetically
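
This works because flow_from_directory assigns class indices in alphanumeric order of the subdirectory names, so a sorted directory listing lines up with the model's output. If the training generator is still in scope, the same mapping can also be read from it directly (a sketch assuming the train_generator from the question):

class_list = sorted(train_generator.class_indices, key=train_generator.class_indices.get) # class names ordered by their index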

Let me know if this worked.

Upvotes: 0

Ioannis Nasios

Reputation: 8537

Construct your model exactly as you did for training

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(5))
model.add(Activation('softmax'))


model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

Load the model's weights from disk

model.load_weights('first_flowers_try.h5')

Load the new image. Because we are predicting on a single image, we have to expand the dimensions to add a batch axis.

import numpy as np
from keras.preprocessing import image

img_path = 'path_to_your_new_image'
img = image.load_img(img_path, target_size=(img_width, img_height)) # resize to the 150x150 size the model was trained on
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0) # add a batch dimension
x = x * 1. / 255 # rescale as in training

Make Prediction

prediction = model.predict(x) #Vector with the prob of each class
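
To turn the probability vector into a label, look up the index of the largest probability in your list of class names. A short sketch, assuming class_list is ordered the same way flow_from_directory ordered the training subdirectories:

predicted_index = np.argmax(prediction[0]) # index of the most probable class
print(class_list[predicted_index]) # the predicted class name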

Upvotes: 2

lenik

Reputation: 23556

You have to call model.load_weights() with the file you saved the weights to. Then load the sample image you need a prediction for, preprocess it the same way as the training data, and call model.predict() on it; the returned vector is the prediction.
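
A minimal sketch of that sequence, assuming the model has been rebuilt exactly as in the question and the image path is a placeholder:

import numpy as np
from keras.preprocessing import image

model.load_weights('first_flowers_try.h5') # weights saved after training

img = image.load_img('path_to_your_new_image', target_size=(150, 150)) # resize to the training input size
sample_image = np.expand_dims(image.img_to_array(img) / 255., axis=0) # rescale and add a batch axis

prediction = model.predict(sample_image) # probability vector over the 5 classes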

Upvotes: 2
