lex2763

Reputation: 113

TensorFlow simple example help - custom gradient

How do you pass a custom gradient into a gradient optimization function in TensorFlow?

I have illustrated what I am trying to do with a simple example (minimizing z = 2x^2 + y^2 + 2).

I have been looking at: https://www.tensorflow.org/api_docs/python/tf/train/Optimizer

The problem works if you use optimizer = tf.train.GradientDescentOptimizer(0.55) and train = optimizer.minimize(z).

This code works:

import tensorflow as tf

x = tf.Variable(11, name='x', dtype=tf.float32)
y = tf.Variable(11, name='y', dtype=tf.float32)
const = tf.constant(2.0, dtype=tf.float32)

z = x**2 + y**2 + const


optimizer = tf.train.GradientDescentOptimizer(0.55)
train = optimizer.minimize(z)

init = tf.global_variables_initializer()

def optimize():
  with tf.Session() as session:
    session.run(init)
    print("starting at", "x:", session.run(x), "y:", session.run(y), "z:", session.run(z))
    for step in range(10):  
      session.run(train)
      print("step", step, "x:", session.run(x), "y:", session.run(y), "z:", session.run(z))


optimize()

But I want to specify the gradient myself. That is, I am trying to do this:

def function_to_minimize(x,y, const):
    # z = 2x^2 + y^2 + constant
    z = 2*x**2 + y**2 + const
    return z

def calc_grad(x,y):
    # z = 2x^2 + y^2 + constant
    dz_dx = 4*x
    dz_dy = 2*y
    return [(dz_dx, x), (dz_dy, y)]

x = tf.Variable(3, name='x', dtype=tf.float32)
y = tf.Variable(3, name='y', dtype=tf.float32)
const = tf.constant(2.0, dtype=tf.float32)


z = function_to_minimize(x,y, const)
grad = calc_grad(x,y)


init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
print(sess.run(z))
print(sess.run(grad))


optimizer = tf.train.GradientDescentOptimizer(0.5)

grads_and_vars = calc_grad(x,y)

optimizer.apply_gradients(grads_and_vars)

# minimize() takes care of both computing the gradients and applying them to the variables.
# If you want to process the gradients before applying them you can instead use the optimizer in three steps:
#     1. Compute the gradients with compute_gradients().
#     2. Process the gradients as you wish.
#     3. Apply the processed gradients with apply_gradients().

How do you do this properly?

Upvotes: 1

Views: 245

Answers (1)

BlackBear

Reputation: 22989

apply_gradients returns an operation that you can use to apply the gradients. In other words, you just do train = optimizer.apply_gradients(grads_and_vars) and the rest works as in the first snippet. I.e.:

optimizer = tf.train.GradientDescentOptimizer(0.55)
grads_and_vars = calc_grad(x,y)
train = optimizer.apply_gradients(grads_and_vars)

init = tf.global_variables_initializer()

def optimize():
  with tf.Session() as session:
    session.run(init)
    print("starting at", "x:", session.run(x), "y:", session.run(y), "z:", session.run(z))
    for step in range(10):  
      session.run(train)
      print("step", step, "x:", session.run(x), "y:", session.run(y), "z:", session.run(z))


optimize()
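
For completeness, if you instead want TensorFlow to compute the gradients for you and only modify them before applying (the three-step flow quoted from the docs in the question), a rough sketch could look like the following. The clipping step is just a placeholder for whatever processing you actually need:

optimizer = tf.train.GradientDescentOptimizer(0.55)
# 1. Compute the gradients of z with respect to the trainable variables
grads_and_vars = optimizer.compute_gradients(z)
# 2. Process them as you wish (clipping is only an illustrative example)
processed = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars]
# 3. Apply the processed gradients; run the returned op in the session as above
train = optimizer.apply_gradients(processed)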

Upvotes: 1
