figs_and_nuts

Reputation: 5743

tf.cond on a variable. FailedPreconditionError in tf.global_variables_initializer()

I am running into a FailedPreconditionError in tf.global_variables_initializer(). I have zeroed in on the following part of the code as the culprit:

def __init__(...):
    ...
    self.global_step = tf.get_variable(initializer=tf.zeros_initializer(), trainable=False, shape=(), name='global_step')
    ...
    step_rampup_value = self.step_rampup(self.global_step, self.rampup_length)

def step_rampup(self, global_step, rampup_length):
    result = tf.cond(global_step < rampup_length,
                     lambda: tf.constant(0.0),
                     lambda: tf.constant(1.0))
    return tf.identity(result, name="step_rampup")
session.run(tf.global_variables_initializer())

self.global_step is to be incremented by 1 by the optimizer at each iteration, so its value has to change; that is the behavior I want.
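
For concreteness, here is a minimal sketch of the increment behavior I mean (TensorFlow 1.x; the loss and the variable w are hypothetical stand-ins for my real model, and I use an integer step here for simplicity):

import tensorflow as tf

global_step = tf.get_variable(name='global_step', shape=(), dtype=tf.int64,
                              trainable=False, initializer=tf.zeros_initializer())
w = tf.get_variable(name='w', shape=(), initializer=tf.ones_initializer())
loss = tf.square(w)
# minimize() bumps global_step by 1 every time train_op runs
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for _ in range(3):
        session.run(train_op)
    print(session.run(global_step))  # 3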

Error message:

FailedPreconditionError ...
506         with tf.Session(graph=highgraph) as session:
--> 507             session.run(tf.global_variables_initializer())
...
FailedPreconditionError: Attempting to use uninitialized value global_step
 [[node global_step/read (defined at NML_U/sNeural.py:103)  = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](global_step)]]

Why do I think that part of the code is the culprit? Because the following code works:

def __init__(...):
    ...
    self.global_step = tf.get_variable(initializer=tf.zeros_initializer(), trainable=False, shape=(), name='global_step')
    ...
    step_rampup_value = self.step_rampup(self.global_step, self.rampup_length)

def step_rampup(self, global_step, rampup_length):
    result = tf.cond(global_step.initialized_value() < rampup_length,
                     lambda: tf.constant(0.0),
                     lambda: tf.constant(1.0))
    return tf.identity(result, name="step_rampup")
session.run(tf.global_variables_initializer())

but that will evaluate the condition with the initialized value of self.global_step (= 0) each time, which is not the intended behavior.

Also, this code works as well:

def __init__(...):
    ...
    self.global_step = tf.get_variable(initializer=tf.zeros_initializer(), trainable=False, shape=(), name='global_step')
    self.global_step = tf.assign(self.global_step, 0.)
    ...
    step_rampup_value = self.step_rampup(self.global_step, self.rampup_length)

def step_rampup(self, global_step, rampup_length):
    result = tf.cond(global_step < rampup_length,
                     lambda: tf.constant(0.0),
                     lambda: tf.constant(1.0))
    return tf.identity(result, name="step_rampup")
session.run(tf.global_variables_initializer())

But (maybe) the condition will then depend not on global_step itself but on the assign op, which will keep assigning 0 to self.global_step.
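
A small sketch of what I suspect is happening (TensorFlow 1.x; the variable v is a hypothetical stand-in for self.global_step):

import tensorflow as tf

v = tf.get_variable(name='v', shape=(), initializer=tf.zeros_initializer())
v_assigned = tf.assign(v, 0.)     # what self.global_step gets rebound to
increment = tf.assign_add(v, 1.)  # stand-in for the optimizer's increment

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(increment)           # v is now 1.0
    print(session.run(v_assigned))   # prints 0.0 -- fetching re-runs the assign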

How do I go about achieving the intended behavior?

Upvotes: 2

Views: 126

Answers (1)

P-Gn

Reputation: 24581

You did not provide the full code, so I can only guess that you are perhaps calling tf.global_variables_initializer() before __init__() runs. The initializer op only covers variables that exist when it is created; it will not initialize variables that are created afterwards.
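
To illustrate, here is a minimal sketch (assuming TensorFlow 1.x and a toy graph in place of yours) of how the order of creation matters:

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # An initializer created here only covers variables that already exist,
    # i.e. none at all at this point.
    early_init = tf.global_variables_initializer()

    global_step = tf.get_variable(name='global_step', shape=(), trainable=False,
                                  initializer=tf.zeros_initializer())
    step_rampup = tf.cond(global_step < 10.,
                          lambda: tf.constant(0.0),
                          lambda: tf.constant(1.0))

    # An initializer created after all variables covers global_step as well.
    late_init = tf.global_variables_initializer()

with tf.Session(graph=graph) as session:
    session.run(early_init)
    # session.run(step_rampup)  # would raise FailedPreconditionError here
    session.run(late_init)
    print(session.run(step_rampup))  # 0.0

So creating the init op after all variables have been added to the graph, and running that op, should make the error go away.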

Upvotes: 1
