Reputation: 724
import tensorflow as tf
# Model parameters
A = tf.Variable([.3], dtype=tf.float32)
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
q_model = A * (x**2) + W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(q_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [0, 1, 2, 3, 4]
y_train = [0, 1, 4, 9, 16]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # initialize the variables
for i in range(1000):
    sess.run(train, {x: x_train, y: y_train})
# evaluate training accuracy
curr_A, curr_W, curr_b, curr_loss = sess.run([A, W, b, loss], {x: x_train, y: y_train})
print("A: %s W: %s b: %s loss: %s"%(curr_A, curr_W, curr_b, curr_loss))
On its website, TensorFlow provides model code to perform linear regression. However, I wanted to play around to see if I could also get it to do quadratic regression. To do so, I added a tf.Variable A, included it in the model, and modified the print statement to report its value as well.
Here are the results:
A: [ nan] W: [ nan] b: [ nan] loss: nan
What do y'all think is the issue here? Is it between the chair and the keyboard?
Upvotes: 1
Views: 3822
Reputation: 2629
If you print the values of A, W, and b for each iteration, you will see that they are alternating (i.e. positive and negative values following each other). This is often due to a large learning rate. In your example, you should be able to fix this behaviour by reducing the learning rate to about 0.001:
optimizer = tf.train.GradientDescentOptimizer(0.001)
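For reference, here is a minimal sketch of the per-iteration diagnostic mentioned above, reusing the session, placeholders, and training data already defined in your code (printing only every 100th step is my choice, just to keep the output short):
for i in range(1000):
    sess.run(train, {x: x_train, y: y_train})
    if i % 100 == 0:
        # inspect the parameters and the loss as training progresses
        curr_A, curr_W, curr_b, curr_loss = sess.run([A, W, b, loss], {x: x_train, y: y_train})
        print("step %d A: %s W: %s b: %s loss: %s" % (i, curr_A, curr_W, curr_b, curr_loss))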
With this learning rate, I achieved a decreasing loss, while A tended to 1 and W and b tended to zero, as expected (the training data is exactly y = x**2, so the exact fit is A = 1, W = 0, b = 0):
A: [ 0.7536] W: [ 0.42800003] b: [-0.26100001] loss: 7.86113
A: [ 0.8581112] W: [ 0.45682004] b: [-0.252166] loss: 0.584708
A: [ 0.88233441] W: [ 0.46283191] b: [-0.25026742] loss: 0.199126
...
A: [ 0.96852171] W: [ 0.1454313] b: [-0.11387932] loss: 0.0183883
A: [ 0.96855479] W: [ 0.14527865] b: [-0.11376046] loss: 0.0183499
A: [ 0.96858788] W: [ 0.14512616] b: [-0.11364172] loss: 0.0183113
A: [ 0.9686209] W: [ 0.14497384] b: [-0.1135231] loss: 0.0182731
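As a quick sanity check (a minimal sketch, reusing the trained session from above), you can evaluate q_model on inputs outside the training set:
# y = x**2 would give 25 and 36; the fitted parameters above give values close to that
print(sess.run(q_model, {x: [5.0, 6.0]}))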
Upvotes: 2