Stefano Kira

Reputation: 175

Changing tf.Variable value in Estimator SessionRunHook

I have a tf.Estimator whose model_fn contains a tf.Variable initialized to 1.0. I would like to change the variable value at every epoch based on the accuracy on the dev set. I implemented a SessionRunHook to achieve this, but when I try to change the value I receive the following error:

raise RuntimeError("Graph is finalized and cannot be modified.")

This is the code for the Hook:

    class DynamicWeightingHook(tf.train.SessionRunHook):
        def __init__(self, epoch_size, gamma_value):
            self.gamma = gamma_value
            self.epoch_size = epoch_size
            self.steps = 0

        def before_run(self, run_context):
            self.steps += 1

        def after_run(self, run_context, run_values):
            if self.steps % self.epoch_size == 0:  # epoch
                with tf.variable_scope("lambda_scope", reuse=True):
                    lambda_tensor = tf.get_variable("lambda_value")
                tf.assign(lambda_tensor, self.gamma)
                self.gamma += 0.1

I understand the Graph is finalized when I run the hook, but I would like to know if there's any other way to change a variable value in the model_fn graph with the Estimator API during training.

Upvotes: 1

Views: 1171

Answers (1)

xdurch0

Reputation: 10475

The way your hook is set up right now, you are essentially trying to create new variables/ops after each session run, which fails because the graph is already finalized. Instead, you should define the tf.assign op beforehand and pass it to the hook so that the hook can run the op itself when needed, or define the assign op in the hook's __init__ (which runs before the graph is finalized). You can access the session inside after_run via the run_context argument. So something like:

class DynamicWeightingHook(tf.train.SessionRunHook):
    def __init__(self, epoch_size, gamma_value, lambda_tensor):
        self.gamma = gamma_value
        self.epoch_size = epoch_size
        self.steps = 0
        self.update_op = tf.assign(lambda_tensor, self.gamma)

    def before_run(self, run_context):
        self.steps += 1

    def after_run(self, run_context, run_values):
        if self.steps % self.epoch_size == 0:  # epoch
            run_context.session.run(self.update_op)
            self.gamma += 0.1

There are some caveats here. For one, I'm not sure whether you can do tf.assign with a plain Python number like this, i.e. whether the op will pick up later changes to gamma or whether the initial value gets baked into the graph as a constant. If this doesn't work, you could try this:

class DynamicWeightingHook(tf.train.SessionRunHook):
    def __init__(self, epoch_size, gamma_value, lambda_tensor):
        self.gamma = gamma_value
        self.epoch_size = epoch_size
        self.steps = 0
        self.gamma_placeholder = tf.placeholder(tf.float32, [])
        self.update_op = tf.assign(lambda_tensor, self.gamma_placeholder)

    def before_run(self, run_context):
        self.steps += 1

    def after_run(self, run_context, run_values):
        if self.steps % self.epoch_size == 0:  # epoch
            run_context.session.run(self.update_op, feed_dict={self.gamma_placeholder: self.gamma})
            self.gamma += 0.1

Here, we use an additional placeholder to be able to pass the "current" gamma to the assign op at all times.

Second, since the hook needs access to the variable, you would need to define the hook inside the model function. You can then pass such hooks to the training process via the training_hooks argument of the EstimatorSpec (see here).

Upvotes: 2
