Adam

Reputation: 610

How to create a tensorflow serving client for the 'wide and deep' model?

I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).

I've exported the model as follows:

  m = build_estimator(model_dir)
  m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
  results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)

  print('Model statistics:')

  for key in sorted(results):
    print("%s: %s" % (key, results[key]))

  print('Done training!!!')

  # Export model
  export_path = sys.argv[-1]
  print('Exporting trained model to %s' % export_path)

  m.export(
   export_path,
   input_fn=serving_input_fn,
   use_deprecated_input_fn=False,
   input_feature_key=INPUT_FEATURE_KEY)

My question is, how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?

Ultimately I need to be able to do this in Java too. I suspect I can do this by creating Java classes from the proto files using gRPC.

The documentation is very sketchy, hence why I'm asking here.

Many thanks!

Upvotes: 11

Views: 1813

Answers (2)

MtDersvan

Reputation: 552

I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.

TL;DR

To export an estimator there are four steps (a code sketch follows the list):

  1. Define features for export as a list of all features used during estimator initialization.

  2. Create a feature config using create_feature_spec_for_parsing.

  3. Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.

  4. Export the model using export_savedmodel().
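
Here is a rough sketch of those four steps, assuming m, export_path and the wide_columns/deep_columns lists come from the tutorial's build_estimator (these are TF 1.x contrib APIs, so exact import paths may vary with your version):

import tensorflow as tf
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils

# 1. All features used during estimator initialization.
feature_columns = list(wide_columns) + list(deep_columns)

# 2. Feature config describing how to parse serialized tf.Examples.
feature_spec = tf.contrib.layers.create_feature_spec_for_parsing(feature_columns)

# 3. Serving input function that parses tf.Example protos at serving time.
serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)

# 4. Export the SavedModel.
m.export_savedmodel(export_path, serving_input_fn)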

To run a client script properly you need to follow these four steps (a minimal client sketch is included below):

  1. Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/

  2. Create or modify corresponding BUILD file by adding a py_binary.

  3. Build and run a model server, e.g. tensorflow_model_server.

  4. Create, build and run a client that sends a tf.Example to our tensorflow_model_server for inference.

For more details look at the tutorial itself.
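
As a rough illustration of step 4, here is a minimal Python client sketch using the beta gRPC API of that era. It assumes the server was started as tensorflow_model_server --port=9000 --model_name=wide_and_deep --model_base_path=... and that the export produced a classification signature; the model name and the feature names/values (age, education) are placeholders to replace with your own columns:

from grpc.beta import implementations
from tensorflow_serving.apis import classification_pb2
from tensorflow_serving.apis import prediction_service_pb2

# Open a channel to the running tensorflow_model_server.
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = classification_pb2.ClassificationRequest()
request.model_spec.name = 'wide_and_deep'  # must match --model_name

# Build one tf.Example with the same feature names used during training.
example = request.input.example_list.examples.add()
example.features.feature['age'].float_list.value.extend([25.0])
example.features.feature['education'].bytes_list.value.append(b'Bachelors')

result = stub.Classify(request, 10.0)  # 10-second timeout
print(result)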

Upvotes: 2

Aviv Goldgeier

Reputation: 809

Just spent a solid week figuring this out. First off, m.export is going to be deprecated in a couple of weeks, so instead of that block, use: m.export_savedmodel(export_path, input_fn=serving_input_fn).

That means you then have to define serving_input_fn(), which is supposed to have a different signature than the input_fn() defined in the wide and deep tutorial. Namely, moving forward, it seems serving input functions are expected to return an InputFnOps object, defined here.

Here's how I figured out how to make that work:

import tensorflow as tf
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes

def serving_input_fn():
  # Reuse the feature tensors from the training input_fn(), plus a string
  # placeholder for the serialized tf.Examples the server will receive.
  features, labels = input_fn()
  features["examples"] = tf.placeholder(tf.string)

  serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
                                                shape=[None],
                                                name='input_example_tensor')
  inputs = {'examples': serialized_tf_example}
  labels = None  # these are not known in serving!
  return input_fn_utils.InputFnOps(features, labels, inputs)

This is probably not 100% idiomatic, but I'm pretty sure it works. For now.

Upvotes: 1
