tianzhi0549

Reputation: 479

How does tf.control_dependencies work for operations defined elsewhere?

In the latest TensorFlow code, the line at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/training/python/training/training.py#L428 is intended to make sure that total_loss is computed only after the update_ops have finished.

However, total_loss is defined elsewhere; this code only holds a reference to it, which, as I understand it, should not make the control_dependencies effective.

How does this ensure that total_loss is computed after the update_ops finish? It seems total_loss could still be computed with stale variable values, before the update_ops run.
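For reference, the construct at that line looks roughly like the following sketch (a paraphrase with stand-in ops, not the exact source; the variable `v` and the update op are mine):

```python
import tensorflow as tf
from tensorflow.python.ops import control_flow_ops

# Stand-ins (assumed names) for the graph built elsewhere.
v = tf.Variable(0.0)
update_ops = [tf.assign_add(v, 1.0)]   # e.g. batch-norm moving-average updates
total_loss = v * 2.0                   # the loss tensor defined elsewhere

# Roughly what the linked line does: rebind the Python name total_loss to a
# new tensor that carries control dependencies on update_ops.
with tf.control_dependencies(update_ops):
    barrier = control_flow_ops.no_op(name='update_barrier')
total_loss = control_flow_ops.with_dependencies([barrier], total_loss)
```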

Upvotes: 2

Views: 678

Answers (1)

Allen Lavoie

Reputation: 5808

It indeed won't affect the originally defined op (running that one won't trigger the update ops), but the new op returned by with_dependencies (confusingly, also referred to by the Python variable total_loss) has control dependencies on the update ops and so will only run once they've run. That new op is what gets passed as the loss to compute_gradients, and so it gets tied into the train op.
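A minimal sketch of that distinction, assuming TF 1.x graph mode and made-up variable and op names: evaluating the original tensor does not run the update op, while evaluating the tensor returned by with_dependencies does.

```python
import tensorflow as tf
from tensorflow.python.ops import control_flow_ops

v = tf.Variable(0.0)
update_op = tf.assign_add(v, 1.0)   # stands in for an update op
loss = v * 2.0                      # the originally defined loss tensor

# New op: same value as `loss`, but with a control dependency on update_op.
loss_with_deps = control_flow_ops.with_dependencies([update_op], loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(loss)              # original op: update_op is NOT triggered
    print(sess.run(v))          # 0.0
    sess.run(loss_with_deps)    # new op: update_op runs before it
    print(sess.run(v))          # 1.0
```

Passing loss_with_deps (rather than loss) to the optimizer is what ties the update ops into the train op.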

Upvotes: 1
