Johnny000

Reputation: 2104

TensorFlow always predicts the same result

I'm trying to get the TensorFlow example running with my own data, but somehow the classifier always picks the same class for every test example. The input data is shuffled beforehand. I have about 4000 images as a training set and 500 images as a test set.

The result I get looks like:

Result: [[ 1.  0.]] Actually: [ 1.  0.] 
Result: [[ 1.  0.]] Actually: [ 0.  1.] 
Result: [[ 1.  0.]] Actually: [ 1.  0.] 
Result: [[ 1.  0.]] Actually: [ 1.  0.] 
Result: [[ 1.  0.]] Actually: [ 0.  1.] 
Result: [[ 1.  0.]] Actually: [ 0.  1.]
...

The predicted result stays [[ 1.  0.]] for all 500 images. The classification is binary, so I have just two labels.

Here is my source code:

import tensorflow as tf
import input_data as id

test_images, test_labels = id.read_images_from_csv(
    "/home/johnny/Desktop/tensorflow-examples/46-model.csv")

train_images = test_images[:4000]
train_labels = test_labels[:4000]
test_images = test_images[4000:]
test_labels = test_labels[4000:]

print len(train_images)
print len(test_images)

pixels = 200 * 200
labels = 2

sess = tf.InteractiveSession()

# Create the model
x = tf.placeholder(tf.float32, [None, pixels])
W = tf.Variable(tf.zeros([pixels, labels]))
b = tf.Variable(tf.zeros([labels]))
y_prime = tf.matmul(x, W) + b
y = tf.nn.softmax(y_prime)

# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, labels])
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(y_prime, y_)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# Train
tf.initialize_all_variables().run()
for i in range(10):
    res = train_step.run({x: train_images, y_: train_labels})
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: test_images, y_: test_labels}))

for i in range(0, len(test_images)):
    res = sess.run(y, {x: [test_images[i]]})
    print("Result: " + str(res) + " Actually: " + str(test_labels[i]))

Am I missing something?

Upvotes: 4

Views: 10092

Answers (2)

user2498105

Reputation: 71

Another problem you may be having is class imbalance. If one class greatly outweighs the other, your model may converge to always predicting that class. Try balancing the classes in your training sample, and also use smaller batches. For example, if your labels are binary, make sure there is an equal number of zero-labeled and one-labeled examples in your training sample.
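
For instance, a minimal sketch of undersampling the majority class, assuming the training images and one-hot labels from the question are available as NumPy arrays (the balance_binary helper is just a hypothetical name):

    import numpy as np

    def balance_binary(images, labels):
        # Class index per example from the one-hot labels ([1, 0] -> 0, [0, 1] -> 1)
        classes = np.argmax(labels, axis=1)
        idx_a = np.where(classes == 0)[0]
        idx_b = np.where(classes == 1)[0]
        # Undersample the majority class so both classes contribute equally
        n = min(len(idx_a), len(idx_b))
        keep = np.concatenate([idx_a[:n], idx_b[:n]])
        np.random.shuffle(keep)
        return images[keep], labels[keep]

    train_images, train_labels = balance_binary(np.array(train_images),
                                                np.array(train_labels))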

Upvotes: 4

mrry

Reputation: 126154

There are three potential issues in your code:

  1. The weights, W, are initialized to zero. This question from stats.stackexchange.com has a good discussion of why this can lead to poor training outcomes (such as getting stuck in a local minimum). Instead, you should initialize them randomly, for example as follows:

    W = tf.Variable(tf.truncated_normal([pixels, labels],
                                        stddev=1./math.sqrt(pixels)))
    
  2. The cross_entropy should be aggregated into a single scalar value before minimizing it, for example using tf.reduce_mean():

    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(y_prime, y_))
    
  3. You may get faster convergence if you train on mini-batches (or even single examples) rather than training on the entire dataset at once:

    for i in range(10):
        for j in range(4000):
            res = train_step.run({x: train_images[j:j+1],
                                  y_: train_labels[j:j+1]})
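
Putting the three changes together, a minimal sketch of the corrected model and training loop could look like the following (it keeps the same pre-1.0 TensorFlow API as the question, reuses train_images/train_labels, and the batch size of 100 is an arbitrary choice):

    import math
    import tensorflow as tf

    pixels = 200 * 200
    labels = 2

    sess = tf.InteractiveSession()

    x = tf.placeholder(tf.float32, [None, pixels])
    y_ = tf.placeholder(tf.float32, [None, labels])

    # 1. Randomly initialized weights instead of zeros
    W = tf.Variable(tf.truncated_normal([pixels, labels],
                                        stddev=1./math.sqrt(pixels)))
    b = tf.Variable(tf.zeros([labels]))
    y_prime = tf.matmul(x, W) + b
    y = tf.nn.softmax(y_prime)

    # 2. Aggregate the per-example losses into a single scalar
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(y_prime, y_))
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

    tf.initialize_all_variables().run()

    # 3. Mini-batch training instead of one update on the whole training set
    batch_size = 100
    for epoch in range(10):
        for start in range(0, len(train_images), batch_size):
            end = start + batch_size
            train_step.run({x: train_images[start:end],
                            y_: train_labels[start:end]})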
    

Upvotes: 14
