Reputation: 21
Basically, I have a function that expects a tensor x and two placeholders z and c.
def error_robust(x, z, c):
    zz = tf.reshape(z, [-1, 28, 28, 1])
    var = tf.reduce_mean(x - zz)
    out = tf.cond(tf.abs(var) <= c,
                  lambda: (c*c/6.0) * (1 - tf.pow(1 - tf.pow(var/c, 2), 3)),
                  lambda: tf.Variable(c*c/6.0))
    return out
I define the placeholders and tensors that I am going to use:
# TensorFlow session and placeholders
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
flat_mnist_data = tf.placeholder(tf.float32, [None, 28*28])
dropout_keep_prob = tf.placeholder(tf.float32)
param_robust = tf.placeholder(tf.float32, shape=())
Calling the defined function does not generate any errors:
error_r = error_robust(layer1_b.reconstruction, flat_mnist_data, param_robust)
This generates an error:
sess.run(tf.global_variables_initializer())
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
I don't really understand why it happens. Any ideas on how to solve this one?
Upvotes: 1
Views: 311
Reputation: 21
Ok, I got it. I was originally expecting c to be a simple scalar, so I wrapped the second branch of the tf.cond in a tf.Variable. Since that variable's initial value depends on the placeholder c, tf.global_variables_initializer() tries to evaluate the placeholder without a feed, which causes the error. Returning the expression directly instead of a tf.Variable in error_robust solves it:
def error_robust(x, z, c):
    zz = tf.reshape(z, [-1, 28, 28, 1])
    var = tf.reduce_mean(x - zz)
    out = tf.cond(tf.abs(var) <= c,
                  lambda: (c*c/6.0) * (1 - tf.pow(1 - tf.pow(var/c, 2), 3)),
                  lambda: c*c/6.0)  # plain tensor expression, no tf.Variable
    return out
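For reference, here is a minimal sketch of initializing and evaluating the op after this change. The batch shape and feed values below are illustrative assumptions, not taken from the original code:

import numpy as np

batch = np.random.rand(16, 28*28).astype(np.float32)  # hypothetical input batch
sess.run(tf.global_variables_initializer())           # now succeeds: no variable initializer depends on a placeholder
value = sess.run(error_r, feed_dict={flat_mnist_data: batch,
                                     param_robust: 1.0,
                                     dropout_keep_prob: 1.0})
print(value)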
Upvotes: 0