Reputation: 4547
I'm using tf-slim to finetune a network, vgg16. I'd like to manually manipulate the gradients by applying a different learning rate to the last layer. But when I try to use opt.minimize(), or tf.gradients() and opt.apply_gradients(), I get a None value for the loss in the summary reporting.

Why does this code path for train_op work:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)
train_op = slim.learning.create_train_op(total_loss, optimizer,
                                         global_step=global_step)
slim.learning.train(train_op, log_dir,
                    init_fn=init_fn,
                    global_step=global_step,
                    number_of_steps=25,
                    save_summaries_secs=300,
                    save_interval_secs=600)
But manually creating the train_op fails with the exception below (the loss reported by slim ends up being None):
trainable = tf.trainable_variables()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)
train_op = optimizer.minimize(total_loss, global_step=global_step)
# exception: it appears that the loss is None
--- Logging error ---
Traceback (most recent call last):
...
File "/anaconda/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/slim/python/slim/learning.py", line 755, in train
sess, train_op, global_step, train_step_kwargs)
File "/anaconda/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/slim/python/slim/learning.py", line 506, in train_step
np_global_step, total_loss, time_elapsed)
File "/anaconda/anaconda3/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: must be real number, not NoneType
...
Message: 'global step %d: loss = %.4f (%.3f sec/step)'
Arguments: (29, None, 51.91366386413574)
What am I doing wrong here?
Upvotes: 2
Views: 1225
Reputation: 4547
My use case is to apply a different learning_rate to the last, finetuning layer of my model, which seemed to suggest I had to use a second optimizer. Under the assumption that sticking with the framework will pay off later, this is what I had to do to cobble together an equivalent of slim.learning.create_train_op() that accepts multiple optimizers and matching grads_and_vars lists.
def slim_learning_create_train_op_with_manual_grads(total_loss, optimizers, grads_and_vars,
                                                    global_step=None,
                                                    # update_ops=None,
                                                    # variables_to_train=None,
                                                    clip_gradient_norm=0,
                                                    summarize_gradients=False,
                                                    gate_gradients=1,  # tf.train.Optimizer.GATE_OP
                                                    aggregation_method=None,
                                                    colocate_gradients_with_ops=False,
                                                    gradient_multipliers=None,
                                                    check_numerics=True):
    """Creates a train_op from matched lists of optimizers and grads_and_vars.

    Modified from slim.learning.create_train_op() so that each optimizer is
    applied to its own list of (gradient, variable) pairs. gate_gradients,
    aggregation_method and colocate_gradients_with_ops are kept for signature
    compatibility but are unused here, since the gradients are already
    computed by the caller.

    Returns:
        A Tensor that, when evaluated, applies all gradient updates and
        returns the value of total_loss.
    """
    from tensorflow.python.framework import ops
    from tensorflow.python.ops import array_ops
    from tensorflow.python.ops import control_flow_ops
    from tensorflow.python.training import training_util

    def transform_grads_fn(grads):
        # Scale selected gradients, e.g. to mimic a larger learning rate.
        if gradient_multipliers:
            with ops.name_scope('multiply_grads'):
                grads = slim.learning.multiply_gradients(grads, gradient_multipliers)
        # Clip gradients by norm.
        if clip_gradient_norm > 0:
            with ops.name_scope('clip_grads'):
                grads = slim.learning.clip_gradient_norms(grads, clip_gradient_norm)
        return grads

    if global_step is None:
        global_step = training_util.get_or_create_global_step()
    assert len(optimizers) == len(grads_and_vars)

    ### order of processing:
    # 0. grads = opt.compute_gradients()   (done by the caller)
    # 1. grads = transform_grads_fn(grads)
    # 2. add_gradients_summaries(grads)
    # 3. grads = opt.apply_gradients(grads, global_step=global_step)
    grad_updates = []
    for i in range(len(optimizers)):
        grads = grads_and_vars[i]           # 0. pairs from opt.compute_gradients()
        grads = transform_grads_fn(grads)   # 1. transform_grads_fn()
        if summarize_gradients:
            with ops.name_scope('summarize_grads'):
                slim.learning.add_gradients_summaries(grads)  # 2. add_gradients_summaries()
        if i == 0:
            # 3. apply_gradients(); increment global_step only once
            grad_update = optimizers[i].apply_gradients(grads,
                                                        global_step=global_step)
        else:
            grad_update = optimizers[i].apply_gradients(grads)
        grad_updates.append(grad_update)

    with ops.name_scope('train_op'):
        if check_numerics:
            total_loss = array_ops.check_numerics(total_loss,
                                                  'LossTensor is inf or nan')
        # Make the returned loss tensor depend on all gradient updates.
        train_op = control_flow_ops.with_dependencies(grad_updates, total_loss)

    # Add the operation used for training to the 'train_op' collection.
    train_ops = ops.get_collection_ref(ops.GraphKeys.TRAIN_OP)
    if train_op not in train_ops:
        train_ops.append(train_op)
    return train_op
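For reference, here is a minimal sketch of how the helper can be called. The names fc8_vars, base_vars, opt_base, opt_fc8 and the 'fc8' scope filter are illustrative assumptions about a VGG-style model, not part of slim:

# Sketch: split trainable variables into the pretrained layers and the
# finetuned last layer; the 'fc8' name filter assumes VGG-style scopes.
all_vars = tf.trainable_variables()
fc8_vars = [v for v in all_vars if 'fc8' in v.op.name]         # last layer
base_vars = [v for v in all_vars if 'fc8' not in v.op.name]    # everything else

opt_base = tf.train.GradientDescentOptimizer(learning_rate=0.001)
opt_fc8 = tf.train.GradientDescentOptimizer(learning_rate=0.01)  # larger lr for finetuning

# One matched grads_and_vars list per optimizer.
grads_base = opt_base.compute_gradients(total_loss, var_list=base_vars)
grads_fc8 = opt_fc8.compute_gradients(total_loss, var_list=fc8_vars)

train_op = slim_learning_create_train_op_with_manual_grads(
    total_loss,
    optimizers=[opt_base, opt_fc8],
    grads_and_vars=[grads_base, grads_fc8],
    global_step=global_step)

# The returned train_op evaluates to the loss, so slim.learning.train()
# can log it as usual.
slim.learning.train(train_op, log_dir, init_fn=init_fn,
                    global_step=global_step, number_of_steps=25)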
Upvotes: 1
Reputation: 1559
The issue is that, despite the name create_train_op(), slim returns a different type than the usual definition of a train_op, which is what you used in the second case with the "non-slim" call:

optimizer.minimize(total_loss, global_step=global_step)
Try, for example, this:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)
train_op_no_slim = optimizer.minimize(total_loss)
train_op = slim.learning.create_train_op(total_loss, optimizer)
print(train_op_no_slim)
print(train_op)
For the first print statement, I get the "usual" (in TensorFlow) training Operation:
name: "GradientDescent_2"
op: "NoOp"
input: "^GradientDescent_2/update_layer1/weight1/ApplyGradientDescent"
input: "^GradientDescent_2/update_layer1/bias1/ApplyGradientDescent"
input: "^GradientDescent_2/update_layer2/weight2/ApplyGradientDescent"
input: "^GradientDescent_2/update_layer2/bias2/ApplyGradientDescent"
input: "^GradientDescent_2/update_layer3/weight3/ApplyGradientDescent"
input: "^GradientDescent_2/update_layer3/bias3/ApplyGradientDescent"
For the second print statement, I get:
Tensor("train_op_1/control_dependency:0", shape=(), dtype=float32)
In short, slim.learning.create_train_op does not have the same return type as optimizer.minimize(). minimize() returns an Operation, which yields no value when run, while slim's create_train_op returns a Tensor that evaluates to the current loss; slim.learning.train logs whatever running the train_op returns, which is why your manually created train_op reports loss = None.
To fix this: your use of a directly defined train_op takes you out of standard slim territory. I suggest embracing that and operating on the directly defined train_op in the non-slim fashion, using sess.run() or train_op.run() as in a typical (non-slim) TensorFlow example.
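For example, a minimal non-slim loop might look like this (a sketch only; it assumes the total_loss and train_op_no_slim tensors from above and a fresh session):

# Sketch of a plain TensorFlow training loop; fetching total_loss together
# with the train op reproduces what slim's train_op tensor bundles into one value.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(25):
        _, loss_val = sess.run([train_op_no_slim, total_loss])
        print('step %d: loss = %.4f' % (step, loss_val))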
Upvotes: 1