wspeirs

Reputation: 1413

TensorFlow Softmax Regression Always Predicts 1

I have the following code based on the MNIST example. It is modified in two ways:

1) I'm not using a one-hot vector, so for the accuracy check I simply use tf.equal(y, y_)

2) My results are binary: either 0 or 1

import tensorflow as tf
import numpy as np

# get the data
train_data, train_results = get_data(2000, 2014)
test_data, test_results = get_data(2014, 2015)

# setup a session
sess = tf.Session()

x_len = len(train_data[0])
y_len = len(train_results[0])

# make placeholders for inputs and outputs
x = tf.placeholder(tf.float32, shape=[None, x_len])
y_ = tf.placeholder(tf.float32, shape=[None, y_len])

# create the weights and bias
W = tf.Variable(tf.zeros([x_len, 1]))
b = tf.Variable(tf.zeros([1]))

# initialize everything
sess.run(tf.initialize_all_variables())

# create the "equation" for y in terms of x
y_prime = tf.matmul(x, W) + b
y = tf.nn.softmax(y_prime)

# construct the error function
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(y_prime, y_)

# setup the training algorithm
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# train the thing
for i in range(1000):
    rand_rows = np.random.choice(train_data.shape[0], 100, replace=False)
    _, w_out, b_out, ce_out = sess.run([train_step, W, b, cross_entropy], feed_dict={x: train_data[rand_rows, :], y_: train_results[rand_rows, :]})

    print("%d: %s %s %s" % (i, str(w_out), str(b_out), str(ce_out)))

# compute how many times it was correct
correct_prediction = tf.equal(y, y_)

# find the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

print(sess.run(accuracy, feed_dict={x: test_data, y_: test_results}))

for i in range(0, len(test_data)):
    res = sess.run(y, {x: [test_data[i]]})

    print("RES: " + str(res) + " ACT: " + str(test_results[i]))

The accuracy is always 0.5 (because my test data contains about as many 1s as 0s). The values of W and b always seem to increase, probably because cross_entropy is always a vector of all zeros.

When I try to use this model for prediction, it always predicts 1:

RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]

What am I doing wrong here?

Upvotes: 1

Views: 1621

Answers (1)

Ian Goodfellow

Reputation: 2604

You seem to be predicting a single scalar rather than a vector. The softmax op produces a vector-valued prediction for each example, and that vector must always sum to 1. When the vector contains only one element, that element must therefore always be 1, which is why your model always predicts 1 and your cross-entropy is always zero.

If you want to use a softmax for this problem, you could use [1, 0] as the output target where you are currently using [0], and [0, 1] where you are currently using [1]. Another option is to keep using just one number, but change the output layer to a sigmoid instead of a softmax, and change the cost function to the sigmoid-based cost function as well.
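For concreteness, here is a minimal sketch of the second option (sigmoid output plus sigmoid-based cost), written against the same pre-1.0 TensorFlow API the question uses; the synthetic data is a hypothetical stand-in for the asker's get_data():

import tensorflow as tf
import numpy as np

# hypothetical stand-in for get_data(): 500 examples, 3 features,
# and binary 0/1 labels of shape [None, 1]
train_data = np.random.rand(500, 3).astype(np.float32)
train_results = (train_data.sum(axis=1, keepdims=True) > 1.5).astype(np.float32)

x = tf.placeholder(tf.float32, shape=[None, 3])
y_ = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.zeros([3, 1]))
b = tf.Variable(tf.zeros([1]))

# one logit per example; sigmoid squashes it to a probability in (0, 1)
y_prime = tf.matmul(x, W) + b
y = tf.sigmoid(y_prime)

# sigmoid-based cost (the old positional signature is (logits, targets))
cross_entropy = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(y_prime, y_))

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# threshold the probability at 0.5 to get a hard 0/1 prediction
correct_prediction = tf.equal(tf.round(y), y_)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

sess = tf.Session()
sess.run(tf.initialize_all_variables())

for i in range(1000):
    rows = np.random.choice(train_data.shape[0], 100, replace=False)
    sess.run(train_step, feed_dict={x: train_data[rows], y_: train_results[rows]})

print(sess.run(accuracy, feed_dict={x: train_data, y_: train_results}))

For the first option, the targets instead become shape [None, 2] one-hot rows ([1, 0] for class 0, [0, 1] for class 1), W becomes [x_len, 2], and accuracy is computed with tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) rather than a direct comparison.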

Upvotes: 5
