fast_cen

Reputation: 1377

Why does this variant of the XOR function not always converge?

I'm trying to implement a simple XOR gate in TensorFlow. My problem is that my function doesn't always converge.

If I'm not wrong, the XOR problem doesn't have local minima, so I don't understand why this would happen.

--

I saw this answer: https://stackoverflow.com/a/33750395/2131871, and it always converges. I took the code from the answer by @mrry and slightly modified it so that, instead of having two output nodes, it only has one. I also used a tanh activation function instead of relu & softmax, and adapted the cross_entropy function.

import math
import tensorflow as tf
import numpy as np

HIDDEN_NODES = 10

x = tf.placeholder(tf.float32, [None, 2])
W_hidden = tf.Variable(tf.truncated_normal([2, HIDDEN_NODES], stddev=1./math.sqrt(2)))
b_hidden = tf.Variable(tf.zeros([HIDDEN_NODES]))
hidden = tf.tanh(tf.matmul(x, W_hidden) + b_hidden)

W_logits = tf.Variable(tf.truncated_normal([HIDDEN_NODES, 1], stddev=1./math.sqrt(HIDDEN_NODES)))
b_logits = tf.Variable(tf.zeros([1]))
logits = tf.matmul(hidden, W_logits) + b_logits
y = tf.tanh(logits)

y_input = tf.placeholder(tf.float32, [None, 1])

cross_entropy = tf.abs(tf.sub(y_input, y))
loss = tf.reduce_mean(cross_entropy)

train_op = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array([[-1], [1], [1], [-1]])

for d in xrange(20):
    init_op = tf.initialize_all_variables()

    sess = tf.Session()
    sess.run(init_op)

    for i in xrange(500):
      _, loss_val = sess.run([train_op, loss], feed_dict={x: xTrain, y_input: yTrain})

      if i % 10 == 0:
        print "Step:", i, "Current loss:", loss_val
        for x_input in [[0, 0], [0, 1], [1, 0], [1, 1]]:
          print x_input, sess.run(y, feed_dict={x: [x_input]})
    assert loss_val < 0.01

Can anybody explain to me why my solution sometimes fails to converge? Thanks.

Upvotes: 2

Views: 303

Answers (1)

dga

Reputation: 21917

The way you're computing your error is letting your network fall into local minima too easily. I suspect it's because the L1 norm of the XOR error has too many equal-weight poor solutions when moving away from an existing solution. (But I'm not positive; an ML expert can give you a more precise answer here. I'm just a systems schmoo.)
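To make the intuition concrete (an illustrative sketch, not part of the original answer, with made-up values): the gradient of the absolute error |t - y| with respect to y always has magnitude 1, no matter how close y already is to the target, while the gradient of the squared error shrinks as the error shrinks. With the absolute error, every training example pulls on the weights with equal force, so the pulls can more easily balance out and leave the network stuck.

import numpy as np

t = 1.0                            # target output for one training example
for y in [-0.9, -0.1, 0.5, 0.95]:  # hypothetical network outputs
    grad_l1 = -np.sign(t - y)      # d|t - y|/dy: magnitude is always 1
    grad_l2 = -(t - y)             # d[(t - y)^2 / 2]/dy: shrinks as y approaches t
    print("y=%+.2f  dL1/dy=%+.2f  dL2/dy=%+.2f" % (y, grad_l1, grad_l2))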

Easy fix: replace these lines:

cross_entropy = tf.abs(tf.sub(y_input, y))
loss = tf.reduce_mean(cross_entropy)

with:

loss = tf.nn.l2_loss(y_input - y)
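For reference, tf.nn.l2_loss(t) computes sum(t ** 2) / 2, i.e. a summed (and halved) squared error over the batch. If you would rather keep the loss on the same per-example scale as the original reduce_mean, a mean-squared-error variant (a sketch of an alternative, not the fix above) behaves similarly:

# Mean squared error over the batch; like tf.nn.l2_loss, but averaged
# instead of summed and halved.
loss = tf.reduce_mean(tf.square(y_input - y))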

Upvotes: 3
