AlgoTactica

Reputation: 11

Keras/TensorFlow error while running as Pyro4 Server

I have a client-server design using Pyro4, in which the client code is as follows:

import Pyro4
uri         =   'PYRO:[email protected]:10000'
test_1      =   Pyro4.Proxy(uri)
test_1.run_model()

The server-side code is as follows:

import Pyro4
import socket
from keras.models import Sequential
from keras.layers import LSTM
import tensorflow as tf

@Pyro4.expose
class PyroServer(object):

    def run_model(self):
        session     =   tf.Session()
        session.run(tf.global_variables_initializer())
        session.run(tf.local_variables_initializer())
        session.run(tf.tables_initializer())
        session.run(tf.variables_initializer([]))
        tf.reset_default_graph()
        model = Sequential()
        model.add(LSTM(25, input_shape=(5, 10)))

host_name   =   socket.gethostbyname(socket.getfqdn())
daemon      =   Pyro4.Daemon(host = host_name,port = 10000)
uri         =   daemon.register(PyroServer,objectId = 'PYRO_SERVER')
daemon.requestLoop()

After the server is started, the first call from the client to the run_model() method works properly. For the second and all subsequent calls, the following error message is displayed:

File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/Pyro4/core.py", line 187, in __call__
return self.__send(self.__name, args, kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/Pyro4/core.py", line 472, in _pyroInvoke
raise data # if you see this in your traceback, you should probably inspect the remote traceback as well
ValueError: Fetch argument cannot be interpreted as a Tensor. (Operation name: "lstm_1/init"
op: "NoOp"
input: "^lstm_1/kernel/Assign"
input: "^lstm_1/recurrent_kernel/Assign"
input: "^lstm_1/bias/Assign"
is not an element of this graph.)

Can anyone suggest a possible solution for this?

Upvotes: 1

Views: 202

Answers (2)

Pascal Louis-Marie

Reputation: 252

I use the code below and it works fine for me.

$ cat greeting-server.py

import Pyro4
import tensorflow as tf

@Pyro4.expose
class GreetingMaker(object):
    def get_fortune(self, name):
        var = tf.constant('Hello, TensorFlow!')
        sess = tf.Session()
        return "Hello, {0}. Here is your greeting message:\n" \
               "{1}".format(name,sess.run(var))

daemon = Pyro4.Daemon()                # make a Pyro daemon
uri = daemon.register(GreetingMaker)   # register the greeting maker as a Pyro object

print("Ready. Object uri =", uri)      # print the uri so we can use it in the client later
daemon.requestLoop()                   # start the event loop of the server to wait for calls

$ cat greeting-client.py

import Pyro4

uri = input("What is the Pyro uri of the greeting object? ").strip()
name = input("What is your name? ").strip()

greeting_maker = Pyro4.Proxy(uri)         # get a Pyro proxy to the greeting object
print(greeting_maker.get_fortune(name))   # call method normally

$ python greeting-server.py &
[1] 2965
Ready. Object uri = PYRO:obj_a751da78da6a4feca49f18ab664cc366@localhost:53025

$ python greeting-client.py
What is the Pyro uri of the greeting object?

PYRO:obj_a751da78da6a4feca49f18ab664cc366@localhost:53025


What is your name?

Plm

2018-03-06 16:20:32.271647: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-06 16:20:32.271673: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-06 16:20:32.271678: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-03-06 16:20:32.271682: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-06 16:20:32.271686: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Hello, Plm. Here is your greeting message:
b'Hello, TensorFlow!'

As you can see, if you connect to the same URI again, the call returns without the TensorFlow initialization time, since that was already done during the first call. So state is maintained across two separate calls, as long as you call the same Pyro URI.

$ python greeting-client.py
What is the Pyro uri of the greeting object?

PYRO:obj_a751da78da6a4feca49f18ab664cc366@localhost:53025


What is your name?

Plm2


Hello, Plm2. Here is your greeting message:
b'Hello, TensorFlow!'

Upvotes: -1

Irmen de Jong

Reputation: 2847

I'm not familiar with TensorFlow, but the actual error is this:

ValueError: Fetch argument cannot be interpreted as a Tensor.

Simplify your code and make it run stand-alone correctly first, only then wrap it in a Pyro service.
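To see why the first call succeeds and the second fails, here is a TF-free sketch of the failure mode. All the names in it (Graph, reset_default_graph, the cached handle) are illustrative stand-ins I made up, not TensorFlow's real API: in TF 1.x there is one process-global default graph, an operation handle is only valid inside the graph that created it, and the question's code resets the default graph while Keras still caches handles from the old one.

```python
class Graph:
    """Stand-in for a TF 1.x graph: ops are only valid in their own graph."""
    def __init__(self):
        self.ops = set()

    def add_op(self, name):
        # Return a handle that remembers which graph owns the op.
        self.ops.add(name)
        return (self, name)

    def run(self, handle):
        graph, name = handle
        if graph is not self:
            # Mirrors: "ValueError: ... is not an element of this graph"
            raise ValueError(name + " is not an element of this graph")
        return "ran " + name

_default = Graph()

def reset_default_graph():
    # Swap in a brand-new default graph, like tf.reset_default_graph().
    global _default
    _default = Graph()

_cached_init = None  # mimics Keras caching ops in module-level state

def run_model_buggy():
    # First call builds the op and succeeds; the reset at the end leaves
    # the cached handle pointing at a graph that no longer exists, so the
    # second call raises ValueError.
    global _cached_init
    if _cached_init is None:
        _cached_init = _default.add_op("lstm_1/init")
    result = _default.run(_cached_init)
    reset_default_graph()
    return result

def run_model_fixed():
    # Start every call with a fresh graph and rebuild the ops inside it,
    # so no handle ever outlives its graph.
    reset_default_graph()
    init = _default.add_op("lstm_1/init")
    return _default.run(init)
```

With real Keras on TF 1.x, the analogous fix would be to call keras.backend.clear_session() at the top of run_model() before rebuilding the model, or better, build the model once at server start and only run it inside the method.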

Upvotes: 0
