I am attempting to create a face recognition application using a CNN and the dlib feature extractor. What I want to do is extract the features from a bunch of photos of the same person, then feed those arrays to my CNN, which will produce a two-class classifier for that person.
As of now, my network is configured to take images as input, but I am unsure how to change it to work with feature arrays. How can I change it to accept dlib feature arrays, what would the predict method look like, and how should the data be formatted?
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard

# img_width, img_height, input_shape, batch_size, epochs, the data directories,
# the sample counts and the `tensorboard` callback are defined earlier in the script
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
train_datagen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(train_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary', shuffle=True)
print(train_generator.class_indices)
validation_generator = test_datagen.flow_from_directory(validation_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary', shuffle=True)
print(validation_generator.class_indices)
model.fit_generator(train_generator, shuffle=True, steps_per_epoch=train_samples // batch_size, epochs=epochs, callbacks=[tensorboard], validation_data=validation_generator, validation_steps=validation_samples // batch_size)
model.save('Models/model.h5')
The way I want this to work: a separate program extracts the features of each face in each photo into a file, and my CNN uses that file to train a yes/no classifier that can later be used for predictions.
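For context, this is roughly the extraction step I have in mind, as a minimal sketch using dlib's pretrained detector, landmark predictor and face recognition model (the .dat paths and file names here are placeholders):

import numpy as np
import dlib

# Pretrained dlib models; the .dat paths are placeholders
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
facerec = dlib.face_recognition_model_v1('dlib_face_recognition_resnet_model_v1.dat')

img = dlib.load_rgb_image('photo.jpg')
descriptors = []
for det in detector(img, 1):
    shape = predictor(img, det)
    # 128-dimensional face descriptor for this detection
    descriptors.append(np.array(facerec.compute_face_descriptor(img, shape)))

np.save('features.npy', np.array(descriptors))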
This is a first try that surely needs more engineering. You can consider the first convolutional layers of a CNN as "feature extraction" layers, and the last fully connected layers as "classification" layers.
import tensorflow as tf
import tensorflow.keras.layers as ll
i1 = ll.Input(shape=input_shape1)  # the images
x = ll.Conv2D(32, (3, 3), activation='relu')(i1)
x = ll.MaxPooling2D(pool_size=(2, 2))(x)
x = ll.Conv2D(32, (3, 3), activation='relu')(x)
x = ll.MaxPooling2D(pool_size=(2, 2))(x)
x = ll.Conv2D(64, (3, 3), activation='relu')(x)
x = ll.MaxPooling2D(pool_size=(2, 2))(x)
i2 = ll.Input(shape=input_shape2)  # the manually extracted features
x = ll.Flatten()(x)  # flatten the conv maps so both inputs are rank-2 before merging
y = ll.Concatenate()([x, i2])  # Concatenate is a layer: instantiate it, then call it
y = ll.Dense(64, activation='relu')(y)
y = ll.Dropout(0.5)(y)
y = ll.Dense(1, activation='sigmoid')(y)
model = tf.keras.models.Model(inputs=[i1, i2], outputs=y)
Then compile and fit as usual, but you will need a generator that serves [i1, i2] in place of the ImageDataGenerator.
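A minimal sketch of such a generator, assuming the images, dlib descriptors and labels are already loaded as aligned NumPy arrays (the names and the 128-dim descriptor size are assumptions):

import numpy as np

# Hypothetical in-memory arrays: images (N, h, w, 3) scaled to [0, 1],
# features (N, 128) dlib descriptors, labels (N,) with 0/1 class ids
def two_input_generator(images, features, labels, batch_size):
    n = len(labels)
    while True:
        order = np.random.permutation(n)  # reshuffle every epoch
        for start in range(0, n - batch_size + 1, batch_size):
            idx = order[start:start + batch_size]
            yield [images[idx], features[idx]], labels[idx]

model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(two_input_generator(images, features, labels, batch_size),
          steps_per_epoch=len(labels) // batch_size,
          epochs=epochs)

# Prediction takes the same two-part input:
# probabilities = model.predict([image_batch, feature_batch])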
If you want to use only the features and not the images, the architecture becomes simpler: drop the convolutional part and just try a dense net.
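For that feature-only variant, a minimal sketch (128 is dlib's descriptor size; adjust it to whatever your extractor produces):

import tensorflow as tf
import tensorflow.keras.layers as ll

feat_in = ll.Input(shape=(128,))  # one dlib descriptor per sample
h = ll.Dense(64, activation='relu')(feat_in)
h = ll.Dropout(0.5)(h)
out = ll.Dense(1, activation='sigmoid')(h)
dense_model = tf.keras.models.Model(inputs=feat_in, outputs=out)
dense_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])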