Reputation: 522
I have a time series model in TF. It's basically a simple auto-regressive model.
The original y is a vector of length 100 (n). I get the "float is not a tensor" error (as per the subject), but only on the second training iteration.
import numpy as np
import tensorflow as tf

LR = .01
STEPS = 100

def Net(x, w, b):
    # x holds the 2 previous values; add their difference as a third feature
    x = [x[-1], x[-2], x[-1] - x[-2]]
    x = tf.reshape(x, [1, 3])
    x = tf.add(tf.matmul(x, w[0]), b[0])
    pred = tf.add(tf.matmul(x, w[1]), b[1])
    return pred

y_data = y - np.mean(y)  # y is still the observed series of length n here
x = tf.placeholder(tf.float32, [2], name='x')
y = tf.placeholder(tf.float32, [1], name='y')
w = [tf.Variable(tf.random_normal([3, 3])), tf.Variable(tf.random_normal([3, 1]))]
b = [tf.Variable(tf.random_normal([1])), tf.Variable(tf.random_normal([1]))]

pred = Net(x, w, b)
cost = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(pred, y))))
optimizer = tf.train.AdamOptimizer(learning_rate=LR).minimize(cost)
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for step in range(STEPS):
        # random sample of half the time indices
        ts = np.random.choice(np.arange(2, n), int(n * .5), replace=False)
        for t in ts:
            x_data = [y_data[t - 2], y_data[t - 1]]
            y_data_cur = [y_data[t]]
            print(x_data, y_data_cur, x, y, pred)
            _, cost, p = sess.run([optimizer, cost, pred], feed_dict={x: x_data, y: y_data_cur})
            print(cost, p)
        if step % 10 == 0:
            print(step, cost)
Upvotes: 1
Views: 3532
Reputation: 59681
When you run your model:
_, cost, p = sess.run([optimizer, cost, pred], feed_dict={x: x_data, y: y_data_cur})
You are overwriting the cost variable, which used to hold the TensorFlow tensor for the cost, with its evaluated float value, so the next iteration fails. Just change the name of the output variable:
_, cost_val, p = sess.run([optimizer, cost, pred], feed_dict={x: x_data, y: y_data_cur})
And of course replace cost with cost_val in the print statements.
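For reference, here is a minimal sketch of the corrected training loop, assuming the graph, y_data, and n are defined exactly as in the question:

with tf.Session() as sess:
    sess.run(init)
    for step in range(STEPS):
        ts = np.random.choice(np.arange(2, n), int(n * .5), replace=False)
        for t in ts:
            x_data = [y_data[t - 2], y_data[t - 1]]
            y_data_cur = [y_data[t]]
            # cost_val and p receive the evaluated values;
            # cost and pred keep referring to the graph tensors
            _, cost_val, p = sess.run([optimizer, cost, pred],
                                      feed_dict={x: x_data, y: y_data_cur})
        if step % 10 == 0:
            print(step, cost_val)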
Upvotes: 4