Markus

Reputation: 341

Getting an Error in TensorFlow

I am following along with a TensorFlow tutorial on YouTube and I have run into an error:

This is my code:

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/",one_hot=True)

n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500

n_classes = 10
batch_size = 100

# height x width
x = tf.placeholder("float")
y = tf.placeholder("float")

def neural_network_model(data):
    hidden_1_layer = {"weights":tf.Variable(tf.random_normal([784,n_nodes_hl1])),"biases":tf.Variable(tf.random_normal([n_nodes_hl1]))}

    hidden_2_layer = {"weights":tf.Variable(tf.random_normal([n_nodes_hl1,n_nodes_hl2])),"biases":tf.Variable(tf.random_normal([n_nodes_hl2]))}

    hidden_3_layer = {"weights":tf.Variable(tf.random_normal([n_nodes_hl2,n_nodes_hl3])),"biases":tf.Variable(tf.random_normal([n_nodes_hl3]))}

    output_layer   = {"weights":tf.Variable(tf.random_normal([n_nodes_hl3,n_classes])),"biases":tf.Variable(tf.random_normal([n_classes]))}

    # (input_data * weights) + biases
    l1 = tf.add(tf.matmul(data,hidden_1_layer["weights"]), hidden_1_layer["biases"])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1,hidden_2_layer["weights"]), hidden_2_layer["biases"])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2,hidden_3_layer["weights"]), hidden_3_layer["biases"])
    l3 = tf.nn.relu(l3)

    output = tf.matmul(l3,output_layer["weights"]) + output_layer["biases"]

    return output

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=prediction,logits=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    # cycles of feed-forward + backprop
    hm_epochs = 10

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        for epoch in range(hm_epoch):
            epoch_loss = 0
            for i in range(int(mnist.train.num.examples/batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                i, c = sess.run([optimizer,cost],feed_dict = {x:epoch_x,y:epoch_y})
                epoch_loss += c
            print("Epoch", epoch, "completed out of", hm_epochs, "loss:" , epoch_loss)
        correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y1))

        accuracy = tf.reduce_mean(tf,cast(correct, "float"))
        print("Accuracy", accuracy.eval({x:mnist.test.images, y:mnist.test.labels}))



train_neural_network(x)

and this is the error:

Traceback (most recent call last):
  File "/home/markus/Documents/NN-Tutorial-04.py", line 65, in <module>
    train_neural_network(x)
  File "/home/markus/Documents/NN-Tutorial-04.py", line 43, in train_neural_network
    optimizer = tf.train.AdamOptimizer().minimize(cost)
  File "/home/markus/.local/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 322, in minimize
    ([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'Variable:0' shape=(784, 500) dtype=float32_ref>", "<tf.Variable 'Variable_1:0' shape=(500,) dtype=float32_ref>", "<tf.Variable 'Variable_2:0' shape=(500, 500) dtype=float32_ref>", "<tf.Variable 'Variable_3:0' shape=(500,) dtype=float32_ref>", "<tf.Variable 'Variable_4:0' shape=(500, 500) dtype=float32_ref>", "<tf.Variable 'Variable_5:0' shape=(500,) dtype=float32_ref>", "<tf.Variable 'Variable_6:0' shape=(500, 10) dtype=float32_ref>", "<tf.Variable 'Variable_7:0' shape=(10,) dtype=float32_ref>"] and loss Tensor("Mean:0", dtype=float32).

Any help would be appreciated. This is the video I was following: https://www.youtube.com/watch?v=PwAGxqrXSCs

Upvotes: 0

Views: 680

Answers (1)

Drop

Reputation: 13003

You've swapped labels and logits in the call to tf.nn.softmax_cross_entropy_with_logits(). The op treats its labels argument as a constant and only propagates gradients into logits; since you passed the placeholder y as logits, there is no gradient path from the loss back to any of the variables in your graph, which is exactly what the ValueError is telling you.

Instead of

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
       labels=prediction,logits=y)
)

you should write

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
       labels=y,logits=prediction)
)
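
You can see the mechanism in a minimal sketch (TF 1.x API; the names here are mine, not from your code). Gradients flow from the loss to whatever is passed as logits, while labels is treated as a constant:

import tensorflow as tf

w = tf.Variable(tf.random_normal([4, 2]))      # a trainable variable
data = tf.placeholder(tf.float32, [None, 4])
labels = tf.placeholder(tf.float32, [None, 2])
logits = tf.matmul(data, w)                    # depends on w

# Correct order: the loss is differentiable w.r.t. w through logits.
good = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
print(tf.gradients(good, [w]))   # [a gradient tensor]

# Swapped order: the only path to w runs through the labels argument,
# which the op treats as a constant, so no gradient reaches w.
bad = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=logits, logits=labels))
print(tf.gradients(bad, [w]))    # [None]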

Additionally, there are some typos that will bite you at runtime: hm_epoch should be hm_epochs, mnist.train.num.examples should be mnist.train.num_examples, tf.argmax(y1) should be tf.argmax(y,1), and tf,cast should be tf.cast. You also overwrite the loop counter i in the inner loop with the optimizer's return value. You don't use i afterwards, but if you did, this would lead to hard-to-diagnose bugs. Rename the returned value to _ (the Python convention for an unused value):

_, c = sess.run([optimizer,cost],feed_dict = {x:epoch_x,y:epoch_y})
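
With all of these fixes applied, the training function would look something like this (a sketch against the TF 1.x API; I also swapped the deprecated tf.initialize_all_variables() for tf.global_variables_initializer()):

def train_neural_network(x):
    prediction = neural_network_model(x)
    # labels come from the placeholder y, logits from the network output
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        labels=y, logits=prediction))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    hm_epochs = 10   # cycles of feed-forward + backprop

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost],
                                feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c
            print("Epoch", epoch, "completed out of", hm_epochs, "loss:", epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, "float"))
        print("Accuracy", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))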

Upvotes: 1
