MatthewScarpino

Reputation: 5936

Can't get simple binary classifier to work

I've written a simple binary classifier using TensorFlow, but the only results I get for the optimized variables are NaN. Here's the code:

import tensorflow as tf

# Input values
x = tf.range(0., 40.)
y = tf.constant([0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
                 1., 0., 0., 1., 0., 1., 0., 1., 1., 1.,
                 1., 1., 0., 1., 1., 1., 0., 1., 1., 1.,
                 1., 1., 1., 0., 1., 1., 1., 1., 1., 1.])

# Variables
m = tf.Variable(tf.random_normal([]))
b = tf.Variable(tf.random_normal([]))

# Model and cost
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), b))
cost = -1. * tf.reduce_sum(y * tf.log(model) + (1. - y) * (1. - tf.log(model)))

# Optimizer
learn_rate = 0.05
num_epochs = 20000
optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)

# Initialize variables
init = tf.global_variables_initializer()

# Launch session
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(num_epochs):
        sess.run(optimizer)

    # Display results
    print("m =", sess.run(m))
    print("b =", sess.run(b))

I've tried different optimizers, learning rates, and test sizes. But nothing seems to work. Any ideas?

Upvotes: 0

Views: 43

Answers (1)

ml4294

Reputation: 2629

You initialize m and b with standard deviation 1, but given your data x and y, you can expect m to be significantly smaller than 1. You can initialize b to zero (this is quite popular for bias terms) and m with a much smaller standard deviation (for example 0.0005), and reduce the learning rate at the same time (for example to 0.00000005). Changing these values delays the NaN values, but they will probably occur eventually, since in my opinion your data is not well described by a linear function.
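The NaN values come from the logarithm in your cost: in float32, the sigmoid underflows to exactly 0 once its argument is strongly negative, and the log of 0 is -inf, which turns the cost and its gradients into NaN. A quick numpy illustration of the underflow:

import numpy as np

# For a strongly negative logit, exp(-logit) overflows to inf in float32,
# so the sigmoid evaluates to exactly 0 and its log to -inf
logit = np.float32(-100.)
sigmoid = np.float32(1.) / (np.float32(1.) + np.exp(-logit))
print(sigmoid)          # 0.0
print(np.log(sigmoid))  # -inf

With that in mind, here is your code with the adjusted initialization and learning rate: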

import tensorflow as tf
import matplotlib.pyplot as plt

# Input values
x = tf.range(0., 40.)
y = tf.constant([0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
                 1., 0., 0., 1., 0., 1., 0., 1., 1., 1.,
                 1., 1., 0., 1., 1., 1., 0., 1., 1., 1.,
                 1., 1., 1., 0., 1., 1., 1., 1., 1., 1.])

# Variables
m = tf.Variable(tf.random_normal([], mean=0.0, stddev=0.0005))
b = tf.Variable(tf.zeros([]))

# Model and cost
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), b))
cost = -1. * tf.reduce_sum(y * tf.log(model) + (1. - y) * (1. - tf.log(model)))

# Optimizer
learn_rate = 0.00000005
num_epochs = 20000
optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)

# Initialize variables
init = tf.global_variables_initializer()

# Launch session
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data (also fetch x and y for plotting later)
    for epoch in range(num_epochs):
        _, xs, ys = sess.run([optimizer, x, y])

    # Retrieve the fitted parameter values
    ms = sess.run(m)
    bs = sess.run(b)
    print(ms, bs)

# Plot the data and the fitted line (the linear part of the model)
plt.plot(xs, ys)
plt.plot(xs, ms * xs + bs)
plt.savefig('tf_test.png')
plt.show()
plt.clf()
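As a side note, the cost line carries over the `(1. - y) * (1. - tf.log(model))` term from the question, whereas the usual cross-entropy would be `(1. - y) * tf.log(1. - model)`; either way, a hand-written log of a sigmoid can still hit log(0). If you only want to get rid of the NaNs, a minimal sketch (assuming TensorFlow 1.x; the learning rate of 0.01 is just an illustrative choice) is to build the cost from the raw logits with tf.nn.sigmoid_cross_entropy_with_logits, which applies the sigmoid internally in a numerically stable way:

import tensorflow as tf

# Input values
x = tf.range(0., 40.)
y = tf.constant([0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
                 1., 0., 0., 1., 0., 1., 0., 1., 1., 1.,
                 1., 1., 0., 1., 1., 1., 0., 1., 1., 1.,
                 1., 1., 1., 0., 1., 1., 1., 1., 1., 1.])

# Variables
m = tf.Variable(tf.random_normal([]))
b = tf.Variable(tf.zeros([]))

# Cost on the raw logits; the sigmoid is applied internally,
# so the log never sees an exact 0
logits = x * m + b
cost = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))

optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(20000):
        sess.run(optimizer)
    print(sess.run(m), sess.run(b))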

Upvotes: 1
