Ray.R.Chua

Reputation: 775

Tensorflow: Writing an Op in Python

I would like to write an Op in Python. This tutorial only explains how to do it in C++ with a Python wrapper: https://www.tensorflow.org/versions/master/how_tos/adding_an_op/index.html#adding-a-new-op

How can I write it completely in Python?

Upvotes: 19

Views: 9644

Answers (3)

Jackson Loper

Reputation: 535

In my experience, the main reason to write a new Op without dropping down to C++ is to implement a custom gradient. If that is why you want to make an Op, you can now do that with tf.custom_gradient.
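A minimal sketch, assuming a TensorFlow version where tf.custom_gradient and tf.math.log are available (the name log1pexp is just illustrative): a custom gradient for a numerically stable log(1 + exp(x)).

import tensorflow as tf

@tf.custom_gradient
def log1pexp(x):
    # Forward pass: compute log(1 + exp(x)).
    e = tf.exp(x)
    def grad(dy):
        # Backward pass: dy * sigmoid(x), written to avoid overflow for large x.
        return dy * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad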

Upvotes: 1

Olivier Moindrot

Reputation: 28218

You can use tf.py_func(func, inp, Tout).

It wraps a python function and uses it as a TensorFlow op: given a python function func that takes numpy arrays as its inputs and returns numpy arrays as its outputs, tf.py_func turns it into an operation in the graph.


Your python function needs to have:

  • numpy arrays as inputs, fed from the graph with the argument inp
  • numpy arrays as outputs; you need to specify their TensorFlow types in the argument Tout

Inside the function you can do whatever you like: if conditions, for loops, anything that is not possible in plain TensorFlow.


However, the operation will be executed on the CPU, so it may be slower than an equivalent TensorFlow op running on the GPU.
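A minimal sketch of the pattern, using the TF 1.x graph API where tf.py_func lives (the function my_func and the placeholder are just illustrative):

import numpy as np
import tensorflow as tf

def my_func(x):
    # Plain Python and numpy: loops, conditionals, other libraries all work here.
    return x * x if x.sum() > 0 else -x

inp = tf.placeholder(tf.float32, [None])      # fed from the graph (the inp argument)
out = tf.py_func(my_func, [inp], tf.float32)  # Tout declares the output dtype

with tf.Session() as sess:
    print(sess.run(out, feed_dict={inp: np.array([1., 2., 3.], dtype=np.float32)}))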

Upvotes: 20

Qifeng Chen

Reputation: 99

You can use tf.py_func to call Python functions, and the operations inside the function can also run on the GPU. For example, we can add an Op and its gradient purely in Python that calls Caffe on the GPU:

import numpy as np
import tensorflow as tf
import caffe

def custom_loss_impl(x):
    # Forward pass: run Caffe on the GPU and return the loss as a numpy float32.
    caffe.set_mode_gpu()
    caffe.set_device(0)
    ...
    return np.float32(loss)

def custom_loss(x):
    # Register the python gradient (only once per process) and route the
    # gradient of this PyFunc node to it via gradient_override_map.
    tf.RegisterGradient("custom_loss_grad")(custom_loss_grad)
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": "custom_loss_grad"}):
        return tf.py_func(custom_loss_impl, [x], [tf.float32])[0]

def custom_loss_grad_impl(x):
    # Backward pass: re-run the forward pass in Caffe, then return d(loss)/dx.
    caffe.set_mode_gpu()
    caffe.set_device(0)
    custom_loss_impl(x)
    ...
    return np.float32(gradient)

def custom_loss_grad(op, grad):
    # TensorFlow calls this with the PyFunc op and the incoming gradient;
    # the incoming grad is assumed to be 1 and is ignored here.
    x = op.inputs[0]
    return tf.py_func(custom_loss_grad_impl, [x], [tf.float32])
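A hedged usage sketch, assuming the imports and definitions above and that the elided Caffe code is filled in: the gradient with respect to x flows through custom_loss_grad because of the override.

x = tf.placeholder(tf.float32, [None])
loss = custom_loss(x)
grad_x = tf.gradients(loss, x)[0]  # computed by custom_loss_grad via the PyFunc override

with tf.Session() as sess:
    l, g = sess.run([loss, grad_x], feed_dict={x: np.zeros(10, dtype=np.float32)})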

Upvotes: 9
