Checking backpropagation gradients

I'm trying to port a reinforcement learning script written in pure Python to TensorFlow.

I built the TensorFlow version, and when I started running samples through it I got exactly the same values in the forward pass (for the first samples). But when I backpropagate, the gradient values are not the same (not even close).

I'm thinking it has to do with backpropagation through the ReLU non-linearity, but then again I'm not entirely sure.

What's the easiest way to see step by step backpropagation of a network architecture?

Upvotes: 3

Views: 2521

Answers (1)

fabmilo

Reputation: 48330

One way is to print the values of the backpropagation gradients:

optimizer = tf.train.AdamOptimizer()
variables = tf.trainable_variables()
# Returns a list of (gradient, variable) pairs, one per trainable variable
gradients = optimizer.compute_gradients(cost, variables)

You can then inspect the values of the computed gradients by passing the gradient tensors to sess.run.
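For example, a minimal sketch could look like this (here `x` and `batch` stand in for your own input placeholder and data, and are assumptions, not part of the original answer):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fetch the gradient tensors for one batch without applying an update.
    # `x` and `batch` are placeholders for your actual input tensor and data.
    grad_values = sess.run([g for g, v in gradients], feed_dict={x: batch})
    for (g, v), value in zip(gradients, grad_values):
        print(v.name, value)

Comparing these printed values against the gradients from your pure-Python implementation, layer by layer, should help narrow down where the two diverge.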

Upvotes: 6
