LJKS

Reputation: 919

Manually changing learning_rate in tf.train.AdamOptimizer

The question is whether changing the learning_rate argument passed to tf.train.AdamOptimizer after the fact actually changes its behaviour. Let's say the code looks like this:

myLearnRate = 0.001
...
output = tf.someDataFlowGraph
trainLoss = tf.losses.someLoss(output)
trainStep = tf.train.AdamOptimizer(learning_rate=myLearnRate).minimize(trainLoss)
with tf.Session() as session:
    #first trainstep
    session.run(trainStep, feed_dict = {input:someData, target:someTarget})
    myLearnRate = myLearnRate * 0.1
    #second trainstep
    session.run(trainStep, feed_dict = {input:someData, target:someTarget})

Would the decreased myLearnRate now be applied in the second trainStep? That is, is the line creating the trainStep node only evaluated once:

trainStep = tf.train.AdamOptimizer(learning_rate=myLearnRate).minimize(trainLoss)

Or is it evaluated on every session.run(trainStep)? And how could I have checked whether my AdamOptimizer in TensorFlow actually changed the learning rate?

Disclaimer 1: I'm aware that manually changing the learning rate is bad practice. Disclaimer 2: I'm aware there is a similar question, but it was solved by feeding a tensor as the learning rate, which is updated on every trainStep (here). That makes me lean towards assuming it only works with a tensor as input for learning_rate in AdamOptimizer, but I'm neither sure of that nor do I understand the reasoning behind it.

Upvotes: 9

Views: 5899

Answers (2)

Engineero

Reputation: 12908

The short answer is that no, your new learning rate is not applied. TF bakes the value into the graph when you build it, and changing the Python variable afterwards will not translate to a change in the graph at run time. You can, however, feed a new learning rate into your graph pretty easily:

# Use a placeholder in the graph for your user-defined learning rate instead
learning_rate = tf.placeholder(tf.float32)
# ...
trainStep = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(trainLoss)
applied_rate = 0.001  # we will update this every training step
with tf.Session() as session:
    #first trainstep, feeding our applied rate to the graph
    session.run(trainStep, feed_dict = {input: someData,
                                        target: someTarget,
                                        learning_rate: applied_rate})
    applied_rate *= 0.1  # update the rate we feed to the graph
    #second trainstep
    session.run(trainStep, feed_dict = {input: someData,
                                        target: someTarget,
                                        learning_rate: applied_rate})

Upvotes: 10

Maxim

Reputation: 53758

Yes, the optimizer is created only once:

tf.train.AdamOptimizer(learning_rate=myLearnRate)

It remembers the learning rate it was given (in fact, it creates a tensor for it if you pass a floating-point number), and future changes to myLearnRate don't affect it.
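
If you want to check this, one way (a sketch only: it relies on the optimizer's private _lr attribute in TF 1.x, which is not a public API and may change between versions) is to print what the optimizer actually stored:

myLearnRate = 0.001
optimizer = tf.train.AdamOptimizer(learning_rate=myLearnRate)
trainStep = optimizer.minimize(trainLoss)

myLearnRate *= 0.1     # only changes the Python float
print(optimizer._lr)   # still prints 0.001: the optimizer kept the value it was constructed with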

Yes, you can create a placeholder and feed it via session.run() if you really want to. But, as you said, it's pretty uncommon and probably means you are approaching your original problem in the wrong way.
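
If the underlying goal is learning rate decay, the usual TF 1.x pattern is a built-in schedule such as tf.train.exponential_decay driven by a global step (a sketch; the decay numbers are arbitrary):

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
    learning_rate=0.001,     # initial rate
    global_step=global_step,
    decay_steps=1000,        # decay by decay_rate every 1000 steps
    decay_rate=0.1)
# Passing global_step makes minimize() increment it on every training step.
trainStep = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
    trainLoss, global_step=global_step)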

Upvotes: 6
