Reputation: 45
I was following the two-class image classification tutorial here and wanted to convert it into a multi-class classifier.
I am trying to train a model to predict the brand of a watch from 17 classes. My accuracy after 50 epochs is only 21.88%, so I'm not sure where I'm going wrong, or even whether I'm approaching this correctly.
Here is my code:
All the images are in their own separate folders under the /data or /valid folders.
Ex: ../watch finder/data/armani
Ex2: ../watch finder/data/gucci
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
import keras.optimizers
img_width, img_height = 210, 210
train_data_dir = 'C:/Users/Adrian/Desktop/watch finder/data'
validation_data_dir = 'C:/Users/Adrian/Desktop/watch finder/valid'
nb_train_samples = 4761
nb_validation_samples = 612
epochs = 50
batch_size = 16
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(17))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
This is my first epoch:
Epoch 1/50
18/18 [==============================] - 8s 422ms/step - loss: 4.1104 - accuracy: 0.0833 - val_loss: 2.8369 - val_accuracy: 0.0592
And this is my 50th/last epoch:
Epoch 50/50
18/18 [==============================] - 7s 404ms/step - loss: 2.4840 - accuracy: 0.2188 - val_loss: 3.0823 - val_accuracy: 0.1795
I am fairly certain I am doing something wrong here, but I am really new to deep learning, so I'm not sure what that something is. Any help is appreciated.
Also, each brand of watch has between 300 and 400 images, and every image is the same size, 210x210.
Upvotes: 1
Views: 246
Reputation: 7442
There seems to be nothing wrong with your approach at a high level.
Has training stopped by the 50th epoch, or is the model still improving? If it is still improving, you might need to increase the learning rate so that it trains faster.
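For example, instead of passing the string 'rmsprop' to compile (which uses the default learning rate of 0.001), you can construct the optimizer yourself and set the rate explicitly. A minimal sketch using tf.keras; the 0.0005 value is only an illustrative starting point to tune from, not a recommendation:

```python
from tensorflow import keras

# Constructing the optimizer explicitly exposes the learning rate.
# The string 'rmsprop' would use the default of 0.001; the value
# below is just an example to experiment with.
optimizer = keras.optimizers.RMSprop(learning_rate=0.0005)
```

You would then pass this instance to model.compile(optimizer=optimizer, ...) in place of the string.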
You should also try different architectures and start tuning the hyperparameters.
Another point I'd like to make is that you have a really small number of images. Try using an established architecture for which you can find pretrained weights; these can help you significantly boost your performance.
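As a rough sketch of what that could look like (assuming tensorflow.keras, and the 210x210 RGB inputs and 17 classes from the question; VGG16 is just one possible backbone, MobileNetV2 or ResNet50 would work the same way):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load a pretrained convolutional base without its classifier head.
base = keras.applications.VGG16(
    weights='imagenet',          # reuse ImageNet features
    include_top=False,           # drop the original 1000-class head
    input_shape=(210, 210, 3))
base.trainable = False           # freeze the pretrained layers

# Attach a small classifier head for the 17 watch brands.
model = keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(17, activation='softmax')])

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
```

The existing flow_from_directory generators can be reused as-is, since the input size and class_mode='categorical' are unchanged.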
One final note is that since you have 17 classes, a model predicting at random would score about 1/17, i.e. just under 6% accuracy. Since you're above that, your model is at least learning something.
Upvotes: 2