Meteor

Reputation: 41

Face recognition with a TensorFlow convolutional neural network only reaches an accuracy of about 0.05

OS: Windows 10

Face database: Yale Face Database (15 different people, about 160 images in total)

Programming language: Python with TensorFlow

I use TensorFlow to do face recognition with a CNN, but the accuracy is only about 0.05. (The convolution layers use no padding.) The network structure is: Conv1 --> max pooling --> Conv2 --> max pooling --> fully connected (15 outputs).
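Since there is no padding, the feature maps shrink at every layer; here is a quick sanity check of the spatial sizes (a plain-Python sketch, assuming the 64x64 inputs, 7x7/8x8 kernels and stride-1 convolutions used in the code below):

def valid_out(size, kernel, stride=1):
    # output size along one dimension for a VALID (unpadded) conv / pool
    return (size - kernel) // stride + 1

s = valid_out(64, 7)      # Conv1, 7x7 kernel  -> 58
s = valid_out(s, 2, 2)    # 2x2 max pool       -> 29
s = valid_out(s, 8)       # Conv2, 8x8 kernel  -> 22
s = valid_out(s, 2, 2)    # 2x2 max pool       -> 11, so the flattened size is 11*11*16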

The code is as follows (some definitions are just like the TensorFlow examples):

import tensorflow as tf
import numpy as np

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="VALID") # no padding

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                         strides=[1, 2, 2, 1], padding="VALID")  # no padding 

The first Conv layer:

# first layer
SHAPE = [None, 64, 64, 1]
Y_SHAPE = [None, 15]
x = tf.placeholder(tf.float32, shape=SHAPE, name="x_data")
y = tf.placeholder(tf.float32, shape=Y_SHAPE, name="y_true")

W1_shape = [7, 7, 1, 6]
b1_shape = [6]
with tf.name_scope("Conv1"):
    W_conv1 = weight_variable(W1_shape)
    b_conv1 = bias_variable(b1_shape)
    tf.summary.histogram("weights", W_conv1)
#     tf.summary.histogram("bias", b_conv1)

    a_conv1 = tf.nn.relu(conv2d(x, W_conv1) + b_conv1)
    a_pool1 = max_pool_2x2(a_conv1)

    # a_pool1 shape : (29, 29, 6)



# second layer
W2_shape = [8, 8, 6, 16]
b2_shape = [16]
with tf.name_scope("Conv2"):
    W_conv2 = weight_variable(W2_shape)
    b_conv2 = bias_variable(b2_shape)
    tf.summary.histogram("weights", W_conv2)
#     tf.summary.histogram("bias", b_conv2)

    a_conv2 = tf.nn.relu(conv2d(a_pool1, W_conv2) + b_conv2)
    a_pool2 = max_pool_2x2(a_conv2)

    # a_pool2 shape (11, 11, 16)



# full connect
W_out_shape = [11*11*16, 15]
b_out_shape = [15]

with tf.name_scope("sigmoid"):
    W_out= weight_variable(W_out_shape)
    b_out = bias_variable(b_out_shape)

    a_pool2_flat = tf.reshape(a_pool2, [-1, 11*11*16])
    z_out = tf.matmul(a_pool2_flat, W_out) + b_out

    a_out = tf.nn.sigmoid(z_out)



# train and evaluate
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=a_out)

batch_size = 40
train_index = np.arange(90)

train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

correct_prediction = tf.equal(tf.argmax(a_out, 1), tf.argmax(y, 1))

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):    # epochs=1000
        # index shuffle
        np.random.shuffle(train_index)
        batch_train = train_data[train_index[:batch_size]] 
        batch_labels = train_labels[train_index[:batch_size]]

        if i % 10 == 0:   # print accuracy every ten epochs
            train_accuracy = accuracy.eval(feed_dict={x:batch_train, y:batch_labels})
            print("step %d, train accuracy %g"%(i, train_accuracy))

        _, loss_ = sess.run([train_step, loss], feed_dict={x:batch_train, y:batch_labels})

    test_index = np.arange(74)
    np.random.shuffle(test_index)
    print("test accuracy:", sess.run(accuracy, feed_dict={x:test_data[test_index], y:test_labels[test_index]}))

writer.close()   # 'writer' is assumed to be a tf.summary.FileWriter created outside this snippet

The following pictures show my output:

[training output screenshots]

Upvotes: 0

Views: 525

Answers (1)

lejlot

Reputation: 66795

The main problem is the incorrect network structure:

a_out = tf.nn.sigmoid(z_out)
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=a_out)

should be

a_out = z_out
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=a_out)

softmax_cross_entropy_with_logits applies softmax internally, so applying a sigmoid beforehand makes no sense (and makes training much harder, if not impossible). In your current setting the softmax inputs are squashed into [0, 1], so the predicted probability of a single class can never exceed about 0.16 instead of being able to approach 1: even with fully saturated sigmoids the softmax is at most

exp(1) / (14*exp(0) + exp(1)) ~= 0.163
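
A minimal sketch of the corrected head, reusing the tensor names from the question (the tf.reduce_mean that turns the per-example cross entropy into a scalar loss is an extra detail, not something the original code had):

z_out = tf.matmul(a_pool2_flat, W_out) + b_out        # raw logits, no sigmoid
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=z_out)
loss = tf.reduce_mean(cross_entropy)                  # scalar loss to minimize

probs = tf.nn.softmax(z_out)                          # probabilities only for prediction/inspection
correct_prediction = tf.equal(tf.argmax(z_out, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))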

Two other problems are:

  • the learning rate seems completely arbitrary; you might want to switch to Adam, which is less sensitive to a badly chosen learning rate (see the sketch after this list)
  • the initialisation scheme seems arbitrary too; you might want to reduce the std
  • instead of printing accuracy, print the loss. If it is not going down on the training set, you have errors in the training; if it is going down but too slowly, adjust the learning rate, and so on.
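
A hedged sketch of the first and third points, reusing the training-loop variables from the question (Adam's default learning rate of 0.001 is only a starting point, not a tuned value; 'loss' is the scalar loss from the sketch above):

train_step = tf.train.AdamOptimizer(1e-3).minimize(loss)   # instead of plain gradient descent

for i in range(1000):
    np.random.shuffle(train_index)
    batch_train = train_data[train_index[:batch_size]]
    batch_labels = train_labels[train_index[:batch_size]]
    _, loss_ = sess.run([train_step, loss],
                        feed_dict={x: batch_train, y: batch_labels})
    if i % 10 == 0:
        print("step %d, train loss %g" % (i, loss_))        # watch the loss, not the accuracy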

Upvotes: 1
