James

Reputation: 4052

TensorFlow simple XOR example not converging

I have the following code to learn a simple XOR network:

import tensorflow as tf
import numpy as np

def generate_xor(length=1000):
    x = np.random.randint(0,2, size=(length,2))
    y = []
    for pair in x:
        y.append(int(np.logical_xor(pair[0],pair[1])))
    return x, np.array(y)

n_inputs = 2
n_hidden = n_inputs*4
n_outputs = 1


x = tf.placeholder(tf.float32, shape=[1,n_inputs])
y = tf.placeholder(tf.float32, [1, n_outputs])

W = tf.Variable(tf.random_uniform([n_inputs, n_hidden],-1,1))
b = tf.Variable(tf.zeros([n_hidden]))

W2 = tf.Variable(tf.random_uniform([n_hidden,n_outputs],-1,1))
b2 = tf.Variable(tf.zeros([n_outputs]))

def xor_model(data):
    x = data
    hidden_layer = tf.nn.relu(tf.matmul(x,W)+b)
    output = tf.nn.relu(tf.matmul(hidden_layer, W2)+b2)
    return output

xor_nn = xor_model(x)
cost = tf.reduce_mean(tf.abs(xor_nn - y))
train_step = tf.train.AdagradOptimizer(0.05).minimize(cost)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

x_data,y_data = generate_xor(length=100000)
errors = []
count = 0
out_freq = 1000
for xor_in, xor_out in zip(x_data,y_data):
    _, err = sess.run([train_step, cost], feed_dict={x:xor_in.reshape(1,2), y:xor_out.reshape(1,n_outputs)})
    errors.append(err)
    count += 1

    if count == out_freq:
        tol = np.mean(errors[-out_freq:])
        print tol
        count = 0
        if tol < 0.005:
            break


n_tests = 100
correct = 0
count = 0
x_test, y_test = generate_xor(length=n_tests)
for xor_in, xor_out in zip(x_test, y_test):
    output = sess.run([xor_nn], feed_dict={x:xor_in.reshape(1,2)})[0]
    guess = int(output[0][0])
    truth = int(xor_out)
    if guess == truth:
        correct += 1
    count += 1
    print "Model %d : Truth %d - Pass Rate %.2f" % (int(guess), int(xor_out), float(correct*100.0)/float(count))

However, I can't get the code to reliably converge. I have tried varying the size of the hidden layer, using different optimizers / step sizes and different initializations of the weights and biases.

I'm clearly making an elementary error somewhere. If anyone could help, I'd be grateful.

EDIT:

Thanks to Prem and Alexander Svetkin I managed to spot my errors. Firstly, I wasn't rounding the outputs when I cast them to ints - a schoolboy mistake. Secondly, I had a relu on the output layer which wasn't needed - a copy-and-paste mistake. Thirdly, relu is indeed a bad choice of activation function for this task; a sigmoid works much better.

So this:

hidden_layer = tf.nn.relu(tf.matmul(x,W)+b)
output = tf.nn.relu(tf.matmul(hidden_layer, W2)+b2)

becomes this:

hidden_layer = tf.nn.sigmoid(tf.matmul(x,W)+b)
output = tf.matmul(hidden_layer, W2)+b2 

and this:

guess = int(output[0][0])

becomes this:

guess = int(output[0][0]+0.5)
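
For reference, here is the whole thing in one place - a minimal end-to-end sketch of the corrected version (same structure and names as above, with the two fixes folded in; I've shortened the training run for the sketch):

import tensorflow as tf
import numpy as np

def generate_xor(length=1000):
    x = np.random.randint(0, 2, size=(length, 2))
    y = np.logical_xor(x[:, 0], x[:, 1]).astype(int)
    return x, y

n_inputs = 2
n_hidden = n_inputs*4
n_outputs = 1

x = tf.placeholder(tf.float32, shape=[1, n_inputs])
y = tf.placeholder(tf.float32, [1, n_outputs])

W = tf.Variable(tf.random_uniform([n_inputs, n_hidden], -1, 1))
b = tf.Variable(tf.zeros([n_hidden]))
W2 = tf.Variable(tf.random_uniform([n_hidden, n_outputs], -1, 1))
b2 = tf.Variable(tf.zeros([n_outputs]))

# sigmoid hidden layer, linear output (no relu on the output layer)
hidden_layer = tf.nn.sigmoid(tf.matmul(x, W) + b)
xor_nn = tf.matmul(hidden_layer, W2) + b2

cost = tf.reduce_mean(tf.abs(xor_nn - y))
train_step = tf.train.AdagradOptimizer(0.05).minimize(cost)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

# train on one example at a time, as in the original code
x_data, y_data = generate_xor(length=10000)
for xor_in, xor_out in zip(x_data, y_data):
    sess.run(train_step, feed_dict={x: xor_in.reshape(1, 2),
                                    y: xor_out.reshape(1, n_outputs)})

# test: round the output instead of truncating it
x_test, y_test = generate_xor(length=100)
correct = 0
for xor_in, xor_out in zip(x_test, y_test):
    out = sess.run(xor_nn, feed_dict={x: xor_in.reshape(1, 2)})
    if int(out[0][0] + 0.5) == int(xor_out):
        correct += 1
print "Pass rate: %.2f%%" % (correct * 100.0 / len(x_test))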

Upvotes: 2

Views: 218

Answers (2)

Alex Svetkin

Reputation: 1409

  1. ReLU just isn't the right activation function for a binary classification task; use something different, like the sigmoid function.
  2. Pay attention to your float output values: should 0.99 mean 1 or 0? Use rounding (see the sketch below).
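
For example, reusing the tensors and variables from your question (just a sketch of the two changes, not a full listing):

# hidden layer with a sigmoid instead of relu
hidden_layer = tf.nn.sigmoid(tf.matmul(x, W) + b)

# when reading back the prediction, round rather than truncate
guess = int(output[0][0] + 0.5)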

Upvotes: 1

Prem

Reputation: 11955

Shouldn't you return the output layer without the relu activation, i.e. just the linear output?

output = tf.matmul(hidden_layer, W2) + b2

Upvotes: 1
