Ran Feldesh

Reputation: 1169

Serving a Keras model with Tensorflow Serving

The TensorFlow 1.12 release notes state that: "Keras models can now be directly exported to the SavedModel format (tf.contrib.saved_model.save_keras_model()) and used with TensorFlow Serving". So I gave it a shot:

I exported a simple model with this function in a single line. However, TensorFlow Serving doesn't recognize the model. I guess the problem is with the Docker call, and possibly with a missing SignatureDef in the model definition. I would be grateful for information about the missing steps.

1. Training and exporting the model to TF serving:

Here is the code, based on Jason Brownlee's first NN example (chosen for its simplicity)

(the training data, as a short CSV file, is here):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.contrib.saved_model import save_keras_model
import numpy

# fix random seed for reproducibility
numpy.random.seed(7)

# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X, Y, epochs=150, batch_size=10)

# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)

# Save the model for serving
path = '/TensorFlow_Models/Keras_serving/saved_model' # full path of where to save the model
save_keras_model(model, path)
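
To check what was actually exported (and whether a serving SignatureDef was written), the SavedModel can be inspected with the saved_model_cli tool; this is a sketch, assuming save_keras_model created a version sub-folder (e.g. a timestamp) under the path above, with the folder name here being a hypothetical example:

saved_model_cli show --dir /TensorFlow_Models/Keras_serving/saved_model/1543392787 --all
# prints the SignatureDefs (if any) with their input/output tensor names and shapes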

2. Setting up TensorFlow Serving:

The server can be set up via Docker or by building it from source. TF recommends Docker (TF ref). Following this, and based on the TF blog and the TF Serving tutorial:

  1. Install Docker (from here)
  2. Pull the latest TF Serving image:

docker pull tensorflow/serving

  3. Activate TF Serving with this model (TF ref):

docker run -p 8501:8501 --name NNN --mount type=bind,source=SSS,target=TTT -e MODEL_NAME=MMM -t tensorflow/serving &

I would be happy if someone could confirm what the placeholders NNN, SSS, TTT, and MMM should be set to; my attempt at filling them in is below.
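
Using the export path from the training script above and the hypothetical names pima_serving / pima_nn (as far as I understand, TF Serving looks for numeric version sub-folders under the target directory):

docker run -p 8501:8501 --name pima_serving \
  --mount type=bind,source=/TensorFlow_Models/Keras_serving/saved_model,target=/models/pima_nn \
  -e MODEL_NAME=pima_nn -t tensorflow/serving &
# pima_serving and pima_nn are arbitrary names chosen here for illustration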

3. The client:

The server can receive requests either over gRPC or over the RESTful API. Assuming we go with the RESTful API, the model can be accessed using curl (here is a TF example). But how do we set the input/output of the model? Are SignatureDefs needed (ref)?
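
For concreteness, this is the kind of request I would expect to send (one row with the same 8 features as X, and the hypothetical model name pima_nn from above), but I am not sure it matches the exported model:

curl -d '{"instances": [[6, 148, 72, 35, 0, 33.6, 0.627, 50]]}' \
  -X POST http://localhost:8501/v1/models/pima_nn:predict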

All in all, while "Keras models can now be directly exported to the SavedModel format (tf.contrib.saved_model.save_keras_model()) and used with TensorFlow Serving", as stated in the TF 1.12 release notes, there is still some way to go in order to actually serve the model. I would be grateful for ideas on completing this.

Upvotes: 3

Views: 2076

Answers (2)

Yiding

Reputation: 91

You are correct about NNN and SSS. NNN can be arbitrary; if not specified, Docker will give the container a random name.

For MMM, it's better to give it a meaningful name.

As for TTT, this is general to the docker run command, and you can refer to the Docker docs. It is where you map (bind) SSS inside the container, usually set to /models/$MODEL_NAME. If you get into the container and open /models/$MODEL_NAME, you will see the version folder(s), just as in SSS.
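
For example, you can verify the bind mount with the placeholders from your command (illustrative only):

docker exec -it NNN ls /models/MMM
# should list the version folder(s), just as in SSS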

The input of the RESTful API is the same as the input to the model in your TensorFlow code, which in your example is X = dataset[:,0:8].

If you didn't define a signature when saving the model, as in the example in the docs, then it isn't necessary for serving.
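
For example, a request with one row of your X, and the kind of response to expect (values are illustrative, MMM is your MODEL_NAME):

curl -d '{"instances": [[6, 148, 72, 35, 0, 33.6, 0.627, 50]]}' \
  -X POST http://localhost:8501/v1/models/MMM:predict
# response: {"predictions": [[0.7]]} -- one value per input row from the sigmoid output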

Upvotes: 2

Manuel Jan

Reputation: 85

Thanks for your question; it is more or less linked with mine: tensorflow-serving signature for an XOR.

I have exactly the same doubts about TTT.

Upvotes: 0
