Reputation: 652
The TensorFlow white paper states that gradients are calculated on the backward pass using the chain rule. I visualised the tutorial "https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_softmax.py" in TensorBoard but couldn't work out whether this is actually happening.
If there is an edge in the TensorBoard visualisation, can data flow along it in both directions? (Normally I would expect directed edges.)
Upvotes: 0
Views: 597
Reputation: 6367
Inside the "GradientDescent" box you've already seen the two "update_w" and "update_b" boxes, and you're wondering why the arrows only point in from the variables, but none out to them.
Basically, the arrows in TensorBoard show dependencies, not how data moves around. A lot of the time the two are similar, but it's not like MATLAB's Simulink, if you're familiar with that. You can't have cycles in the graph.
So the arrows say: You can't run "update_w" until you have "w", "learning_rate" and "gradients".
"update_w" does update "w", there just isn't an arrow showing it.
The update doesn't go backwards along the arrow either. Consider:
import tensorflow as tf

x = tf.Variable(0, name='x')
y = tf.Variable(1, name='y')
set_x_eq_y = x.assign(y, name='set_x_eq_y')  # running this op copies y's value into x
Here the graph would have an arrow from "y" to "set_x_eq_y". No arrow touching "x".
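If it helps to see it run, here's a minimal continuation of the snippet above (assuming the TF1 session API):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(set_x_eq_y)   # the assign op reads y and writes into x
    print(sess.run(x))     # prints 1: x was updated despite no incoming arrow in the graph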
Upvotes: 1