Reputation: 75
import tensorflow as tf
# H(x) = Wx + b
W = tf.Variable(tf.random_normal([1], name='weight'))
b = tf.Variable(tf.random_normal([1], name='bias'))
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
hypothesis = X * W + b
cost = tf.reduce_mean(tf.square(hypothesis - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
Weights = []
for step in range(100):
    sess.run([cost, hypothesis, train], feed_dict={X: x_col[0], Y: y_col[0]})
    if step % 99 == 0:
        print(step, sess.run(cost), sess.run(W), sess.run(b))
This is the code I have. When I enter x_col[0] in the Python shell I get array([ 3., 5., 73., 33.], dtype=float32), and for y_col[0] I get array([ 3., 5., 73., 33.]).
So I believe the code should work, giving a cost of 0, a W of 1, and a b of 0. But this error comes up, and I don't know how to fix it.
For your information, sess.run([cost, hypothesis, train], feed_dict={X: x_col[0], Y: y_col[0]}) returns [960446.13, array([ 76.92639923, 127.70278168, 1854.09997559, 838.57220459], dtype=float32), None].
Upvotes: 1
Views: 6968
Reputation: 185
I am using Google Colab and I faced the same problem. I solved it by opening the "Runtime" menu and selecting "Restart and run all".
This re-ran all of the code in the notebook and the error no longer appeared!
Upvotes: 0
Reputation: 825
Check that every placeholder you defined earlier has been fed through feed_dict. When your graph is executed, the placeholders exist only as nodes in memory; TensorFlow checks whether each one that the requested operation depends on has been given a real value in feed_dict, and if not it raises this error - at least that was the case for me.
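For example, here is a minimal sketch of that failure mode and its fix (the placeholders a and b are made up for illustration, not taken from the question):
import tensorflow as tf

# Two placeholders; any op that depends on them needs both fed at run time.
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
total = a + b

with tf.Session() as sess:
    # sess.run(total, feed_dict={a: 1.0})               # error: placeholder b was never fed
    print(sess.run(total, feed_dict={a: 1.0, b: 2.0}))  # 3.0 - both placeholders fed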
Upvotes: 0
Reputation: 4811
In your print statement
print(step, sess.run(cost), sess.run(W), sess.run(b))
you are using sess.run(cost), but cost depends on X and Y, which are placeholders, so you have to provide their values. You'll need to pass them in feed_dict:
print(step, sess.run(cost, feed_dict={X: some_x_value, Y: some_y_value}), sess.run(W), sess.run(b))
Upvotes: 1
Reputation: 53768
@layog's answer is right. Just want to show you the code you should use:
for step in range(100):
    cost_val, W_val, b_val, _ = sess.run([cost, W, b, train],
                                         feed_dict={X: x_col[0], Y: y_col[0]})
    if step % 99 == 0:
        print(step, cost_val, W_val, b_val)
It's more efficient to run the training op and compute the tensor values in one shot (note that you don't have to specify hypothesis). If you want to explicitly compute any tensor on its own, you'll have to pass the placeholders too:
sess.run(cost, feed_dict={X: x_col[0], Y: y_col[0]})
Upvotes: 1
Reputation: 1461
In TensorFlow you define a computational graph that is executed with the sess.run() statement. As part of that graph, the cost operation is defined in terms of the placeholders X and Y. To compute cost you have to feed values for X and Y.
In your print statement you call sess.run(cost) without feeding X and Y. That is the reason for the error.
But you already executed the graph; just store the resulting values:
C, H, _ = sess.run([cost, hypothesis, train], feed_dict={X: x_col[0], Y: y_col[0]})
and print the results for cost C and hypothesis H.
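Put together, a minimal sketch of the loop using those stored values (assuming the x_col and y_col arrays from the question):
for step in range(100):
    # run the training op and fetch cost/hypothesis in the same call,
    # so no extra sess.run(cost) without a feed_dict is needed
    C, H, _ = sess.run([cost, hypothesis, train],
                       feed_dict={X: x_col[0], Y: y_col[0]})
    if step % 99 == 0:
        print(step, C, H)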
Upvotes: 0