Samit

Reputation: 31

TensorFlow sess.run returns a list instead of float32, causing TypeError: unsupported operand type(s) for +=: 'float' and 'list'

I'm new to coding and ran into a strange problem. I could not find a good answer on Stack Overflow or elsewhere that explains the error or shows how to avoid it. TensorFlow's sess.run returns a list even though the original variable is of type float32. These are the controlling lines:

accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))
accuracy_batch = sess.run([accuracy], feed_dict={images_pl: image, labels_pl: label})

This leads to a type error downstream on this line:

total_correct_preds += accuracy_batch

which raises the following error:

TypeError: unsupported operand type(s) for +=: 'float' and 'list'

Here is the complete code:

    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
    import numpy as np
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    import time

    learning_rate = 0.01
    batch_size = 32
    n_epochs = 10

    mnist = input_data.read_data_sets('data/mnist', one_hot=True)
    images_pl = tf.placeholder(tf.float32, shape=[1,784], name="images_pl")
    labels_pl = tf.placeholder(tf.float32, shape=[1,10], name="labels_pl")
    w = tf.Variable(tf.zeros([784, 10]), dtype=tf.float32, name="Weights")
    b = tf.Variable(tf.zeros([1, 10]), dtype=tf.float32, name="Bias")

    logits = tf.matmul(images_pl, w) + b
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_pl)
    train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

    n_trainset = len(mnist.train.images)
    n_testset = len(mnist.test.images)

    # test the model
    preds = tf.nn.softmax(logits)
    correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(labels_pl, 1))
    # accuracy = tf.cast(correct_preds, tf.float32)
    accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))  # need numpy.count_nonzero(boolarr) :(
    print('accuracy dtype',accuracy.dtype)

    with tf.Session() as sess:
        start_time = time.time()
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        #n_batches = int(mnist.train.num_examples/batch_size)
        for i in range(n_epochs): # train the model n_epochs times
            total_loss = 0
    #       for j in range(n_trainset):
            for j in range(5):
                image = np.reshape(mnist.train.images[j], [1,784])
                label = np.reshape(mnist.train.labels[j], [1,10])
                _, loss_curr = sess.run([train_op, loss], feed_dict={images_pl: image, labels_pl: label})
                total_loss += loss_curr
                #print(loss_curr)
            print('Average loss epoch {0}: {1}'.format(i, total_loss))

        print('Total time: {0} seconds'.format(time.time() - start_time))
        print('Optimization Finished!') # should be around 0.35 after 25 epochs

        #n_batches = int(mnist.test.num_examples/batch_size)
        total_correct_preds = 0.0

        for k in range(10):
            image = np.reshape(mnist.test.images[k], [1, 784])
            label = np.reshape(mnist.test.labels[k], [1, 10])
            accuracy_batch = sess.run([accuracy], feed_dict={images_pl: image, labels_pl: label})
            #print(accuracy_batch.dtype)
            #accuracy_batch = tf.cast(accuracy_batch, tf.float32)
            print(accuracy_batch)
            print('accuracy_batch',accuracy_batch)
            total_correct_preds += accuracy_batch

        print('total_correct_preds',total_correct_preds)

This is strange because it follows the same structure as the total_loss / loss_curr pattern in the training part, which works fine. The full output log follows:

Extracting data/mnist/train-images-idx3-ubyte.gz
..
Extracting data/mnist/t10k-labels-idx1-ubyte.gz
('accuracy dtype', tf.float32)
Average loss epoch 0: [ 11.81690311]
Average loss epoch 1: [ 8.99989128]
...
Average loss epoch 9: [ 1.64795518]
Total time: 0.04798579216 seconds
Optimization Finished!
[0.0]
('accuracy_batch', [0.0])
Traceback (most recent call last):
  File "CS20SI/Code/03_logistic_regression_mnist_starter.py", line 62, in <module>
    total_correct_preds += accuracy_batch
TypeError: unsupported operand type(s) for +=: 'float' and 'list'

Can someone explain why sess.run is returning a list when the original variable is of dtype float32?

Upvotes: 3

Views: 4224

Answers (1)

velikodniy

Reputation: 863

sess.run accepts a list of graph elements and returns a list of their values. In your case the list of graph elements is [accuracy], so sess.run returns a list with a single element.
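You can see this with a minimal standalone sketch (a hypothetical constant, using the TF 1.x session API):

    import tensorflow as tf

    x = tf.constant(1.0, dtype=tf.float32)
    with tf.Session() as sess:
        print(sess.run([x]))  # [1.0], a one-element list
        print(sess.run(x))    # 1.0, a bare float32 scalar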

For convenience you can write

(accuracy_batch, ) = sess.run([accuracy], feed_dict={images_pl: image, labels_pl: label})

or even

accuracy_batch, = sess.run([accuracy], feed_dict={images_pl: image, labels_pl: label})

Python will match the list produced by sess.run against the tuple, so you will get a float32 in accuracy_batch.
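This is plain Python sequence unpacking, independent of TensorFlow:

    value, = [3.14]   # one-element list unpacked into one name
    print(value)      # 3.14, a bare float, not a list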

You can also pass a single graph element:

accuracy_batch = sess.run(accuracy, feed_dict={images_pl: image, labels_pl: label})
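With that change, the accumulation in your test loop works as intended (a sketch using the names from your code, otherwise unchanged):

    total_correct_preds = 0.0
    for k in range(10):
        image = np.reshape(mnist.test.images[k], [1, 784])
        label = np.reshape(mnist.test.labels[k], [1, 10])
        # fetching the tensor directly returns a float32 scalar, not a list
        accuracy_batch = sess.run(accuracy, feed_dict={images_pl: image, labels_pl: label})
        total_correct_preds += accuracy_batch  # float += float32 scalar works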

Upvotes: 4
