DocDriven

Reputation: 3974

Requests to TensorFlow serving's predict API returns error "Missing inputs"

I have trained a simple regression model to fit a linear function with the following equation: y = 3x + 1. For testing purposes, I saved the model as checkpoints, so that I could resume training and wouldn't have to start from scratch every time.
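
For context, rt.LinearModel exposes an input placeholder model.x and a prediction tensor model.y_pred, which the export script below relies on. A minimal TF 1.x sketch of such a model (the actual restoretest module is not shown here) could look like this:

import tensorflow as tf

class LinearModel:
    ## hypothetical stand-in for the LinearModel defined in the restoretest module
    def __init__(self):
        self.x = tf.placeholder(tf.float32, shape=[None], name='x')   ## input values
        self.y = tf.placeholder(tf.float32, shape=[None], name='y')   ## target values
        w = tf.Variable(0.0, name='w')
        b = tf.Variable(0.0, name='b')
        self.y_pred = w * self.x + b                                   ## prediction
        self.loss = tf.reduce_mean(tf.square(self.y_pred - self.y))   ## MSE loss
        self.train_op = tf.train.GradientDescentOptimizer(0.01).minimize(self.loss)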

Now I want to make this model available via TF Serving. To do that, I had to convert it into TensorFlow's SavedModel format with the following script:

import tensorflow as tf
import restoretest as rt  ## just the module that contains the linear model

tf.reset_default_graph()        

latest_checkpoint = tf.train.latest_checkpoint('path/to/checkpoints')
model = rt.LinearModel()
saver = tf.train.Saver()

export_path = 'path/to/export/folder'

with tf.Session() as sess:

    if latest_checkpoint:
        saver.restore(sess, latest_checkpoint)
    else:
        raise ValueError('No checkpoint file found') 

    print('Exporting trained model to', export_path)

    builder = tf.saved_model.builder.SavedModelBuilder(export_path)

    ## define inputs and outputs

    tensor_info_x = tf.saved_model.utils.build_tensor_info(model.x)
    tensor_info_y = tf.saved_model.utils.build_tensor_info(model.y_pred)

    prediction_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                    inputs={'xvals': tensor_info_x},
                    outputs={'yvals': tensor_info_y},
                    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))


    builder.add_meta_graph_and_variables(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                    tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                            prediction_signature},
            main_op=tf.tables_initializer(),
            strip_default_attrs=True)

    builder.save()

    print('Done exporting')

This creates a folder (as expected) with the contents:

export_folder
    |-saved_model.pb
    |-variables
        |-variables.index
        |-variables.data-00000-of-00001
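
Before serving, the export can be sanity-checked by loading it back into a fresh session. A minimal sketch, assuming the TF 1.x loader API and the same placeholder export path:

import tensorflow as tf

export_path = 'path/to/export/folder'

tf.reset_default_graph()
with tf.Session() as sess:
    ## load graph and variables under the 'serve' tag used during export
    meta_graph = tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], export_path)
    ## the default signature should list 'xvals' as input and 'yvals' as output
    sig_key = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
    sig = meta_graph.signature_def[sig_key]
    print('inputs: ', list(sig.inputs.keys()))
    print('outputs:', list(sig.outputs.keys()))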

To serve this with TF Serving and Docker, I pulled the tensorflow/serving image from Docker Hub and ran the container with the command:

sudo docker run -p 8501:8501 --mount type=bind,source=path/to/export/folder,target=/models/linear -e MODEL_NAME=linear -t tensorflow/serving

This seems to execute without problems, as I get a lot of info messages. The last line of the output says:

[evhttp_server.cc : 237] RAW: Entering the event loop ...

I guess the server is waiting for requests. Now, when I try to send a request to it via curl, I get an error:

curl -d '{"xvals": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/linear:predict

{ "error": "Missing \'inputs\' or \'instances\' key" }

What am I doing wrong? The model works when I send dummy values via the saved_model_cli.
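
For reference, the saved_model_cli check I mean looks roughly like this (placeholder path; the exact input shape depends on how the model defines xvals):

# inspect the exported signatures
saved_model_cli show --dir path/to/export/folder --all

# run the default signature with dummy values
saved_model_cli run --dir path/to/export/folder \
    --tag_set serve \
    --signature_def serving_default \
    --input_exprs 'xvals=[1.0, 2.0, 5.0]'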

Upvotes: 0

Views: 1746

Answers (1)

Vlad-HC

Reputation: 4757

Looks like the body of the POST request needs to be modified. According to the TensorFlow Serving REST API documentation, the format should be:

{ "inputs": {"xvals": [1.0 2.0 5.0]} }

Upvotes: 2
