user1955534

Reputation: 53

TensorFlow Trained Model Always Predicts Zero

I have a simple TensorFlow model whose accuracy is 1. But when I try to predict some new inputs, it always returns zero (0).

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

# generate data

np.random.seed(10)

#inputs = np.random.uniform(low=1.2, high=1.5, size=[5000, 150]).astype('float32')

inputs = np.random.randint(low=50, high=500, size=[5000, 150])


label = np.random.uniform(low=1.3, high=1.4, size=[5000, 1])
# reverse_label = 1 - label
reverse_label = np.random.uniform(
    low=1.3, high=1.4, size=[5000, 1])
reverse_label1 = np.random.randint(
    low=80, high=140, size=[5000, 1])
#labels = np.append(label, reverse_label, 1)
#labels = np.append(labels, reverse_label1, 1)
labels = reverse_label1
print(inputs)
print(labels)
# parameters

learn_rate = 0.001
epochs = 100
n_input = 150
n_hidden = 15
n_output = 1

# set weights/biases

x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])


b0 = tf.Variable(tf.truncated_normal([n_hidden], stddev=0.2, seed=0))
b1 = tf.Variable(tf.truncated_normal([n_output], stddev=0.2, seed=0))

w0 = tf.Variable(tf.truncated_normal([n_input, n_hidden], stddev=0.2, seed=0))
w1 = tf.Variable(tf.truncated_normal([n_hidden, n_output], stddev=0.2, seed=0))


# step function


def returnPred(x, w0, w1, b0, b1):

    z1 = tf.add(tf.matmul(x, w0), b0)
    a2 = tf.nn.relu(z1)

    z2 = tf.add(tf.matmul(a2, w1), b1)
    h = tf.nn.relu(z2)

    return h  # return the output of the network


y_ = returnPred(x, w0, w1, b0, b1)  # predict operation

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=y_, labels=y))  # calculate loss between prediction and actual
model = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(
    loss)  # apply gradient descent based on loss


init = tf.global_variables_initializer()
tf.Session = sess
sess.run(init)  # initialize graph

for step in range(0, epochs):
    sess.run([model, loss], feed_dict={x: inputs, y: labels})  # train model



correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: inputs, y: labels}))  # print accuracy


inp = np.random.randint(low=50, high=500, size=[5, 150])


print(sess.run(tf.argmax(y_, 1), feed_dict={x: inp})) # predict some new inputs

All functions work properly, and my problem is with the last line of code. I tried using only "y_" instead of "tf.argmax(y_, 1)", but that did not work either. How can I fix this? Regards,

Upvotes: 3

Views: 698

Answers (1)

Prasad

Reputation: 6034

There are multiple mistakes in your code.

Starting with these lines of code:

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: inputs, y: labels}))  # print accuracy

You are performing linear regression, but you are checking accuracy with a logistic regression (classification) methodology. If you want to see how your linear regression network is performing, print the loss and ensure that it decreases after each epoch of training.
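For instance, a minimal sketch of that check, reusing sess, model, loss, inputs and labels from your own training loop (only the print statement is new):

for step in range(0, epochs):
    _, loss_val = sess.run([model, loss], feed_dict={x: inputs, y: labels})
    print("epoch", step, "loss", loss_val)  # watch whether the loss decreases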

To see why that accuracy check is misleading, run the following code:

print(y_.get_shape())    # Outputs (?, 1)

There is only one output column, so both tf.argmax(y, 1) and tf.argmax(y_, 1) will always return [0, 0, ...]. As a result, your accuracy will always be 1.0. Delete those three lines of code.
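To see this concretely, here is a small NumPy illustration (independent of your graph): argmax along axis 1 of a single-column array has only one column to choose from, so the result is always 0.

import numpy as np

single_column = np.array([[0.7], [1.3], [99.0]])  # shape (3, 1), like y and y_
print(np.argmax(single_column, axis=1))           # prints [0 0 0]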

Next, to get the outputs, just run the following code:

print(sess.run(y_, feed_dict={x: inp}))

But since your data is random, don't expect a good set of outputs.

Upvotes: 1
