Bosen

Reputation: 941

Why must this tf.placeholder be a float?

Why does x need to be a float? Why can't it be an int, since I am passing in a list of type int?

Code:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1])  # Why must this be a float?
y = tf.placeholder(tf.int32, shape=[None, 2])

with tf.name_scope("network"):
    layer1 = tf.layers.dense(x, 100, activation=tf.nn.relu, name="hidden_layer")
    output = tf.layers.dense(layer1, 2, name="output_layer")

with tf.name_scope("loss"):
    xentropy = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=output)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer()
    training_op = optimizer.minimize(loss)

with tf.name_scope("eval"):
    with tf.Session() as sess:
        for i in range(1, 50):
            sess.run(tf.global_variables_initializer())
            saver = tf.train.Saver()
            sess.run(training_op, feed_dict={x: np.array(train_data).reshape([-1, 1]), y: label})
            if i % 10 == 0:
                saver.save(sess, "saved_models/testing")
                print "Saved"

When I change it to tf.int32, it gives the following error:

TypeError: Value passed to parameter 'features' has DataType int32 not in list of allowed values: float16, float32, float64

I can provide more code if needed.

Upvotes: 2

Views: 709

Answers (1)

P-Gn

Reputation: 24651

This is due to tf.nn.softmax_cross_entropy_with_logits:

logits and labels must have the same shape [batch_size, num_classes] and the same dtype (either float16, float32, or float64).
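
You can see this without the rest of the graph: feeding integer logits (and labels) straight into the op triggers the same TypeError at graph-construction time (a minimal sketch, assuming TF 1.x as in your code):

import tensorflow as tf

labels = tf.constant([[0, 1]], dtype=tf.int32)
logits = tf.constant([[1, 2]], dtype=tf.int32)
# Fails with a TypeError because the op only accepts float16/float32/float64 logits.
xentropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)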

I suppose you could compute a loss with integer inputs. However, most of the time, this loss is minimized by gradient descent -- as you do -- which means the inputs need to encode real numbers to get arbitrary updates.
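
To see why, look at what a single gradient-descent step has to do (a small sketch of that point, TF 1.x assumed):

w = tf.Variable(2.0)                     # must be a float to hold values like 1.85
loss = tf.square(w - 0.5)
# One step moves w by -0.05 * d(loss)/dw = -0.05 * 2 * (w - 0.5) = -0.15,
# a fractional update that an integer tensor could not represent.
step = tf.train.GradientDescentOptimizer(0.05).minimize(loss)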

The thing is that tf.layers.dense won't change the type of your input, so it will produce an integer output if its input is an integer. (At least if the activation is compatible with integers, such as relu -- a sigmoid would raise an error.)

What you probably wanted to do is provide integer inputs and then do all computations in, say, tf.float32. To do this, cast your input before providing it to dense:

layer1 = tf.layers.dense(tf.to_float(x), 100, activation=tf.nn.relu, name="hidden_layer")
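
Equivalently, you could keep feeding integers and do the cast explicitly with tf.cast inside the graph (a minimal sketch along the same lines):

x = tf.placeholder(tf.int32, shape=[None, 1])   # integer data can still be fed directly
x_float = tf.cast(x, tf.float32)                # same effect as tf.to_float(x)
layer1 = tf.layers.dense(x_float, 100, activation=tf.nn.relu, name="hidden_layer")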

Upvotes: 3
